00:00:00.000 Started by upstream project "autotest-nightly" build number 4287 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3650 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.219 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.220 The recommended git tool is: git 00:00:00.220 using credential 00000000-0000-0000-0000-000000000002 00:00:00.222 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.251 Fetching changes from the remote Git repository 00:00:00.252 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.276 Using shallow fetch with depth 1 00:00:00.276 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.276 > git --version # timeout=10 00:00:00.306 > git --version # 'git version 2.39.2' 00:00:00.306 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.325 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.325 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.297 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.308 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.321 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:08.321 > git config core.sparsecheckout # timeout=10 00:00:08.331 > git read-tree -mu HEAD # timeout=10 00:00:08.346 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:08.370 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:08.371 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:08.483 [Pipeline] Start of Pipeline 00:00:08.498 [Pipeline] library 00:00:08.500 Loading library shm_lib@master 00:00:08.500 Library shm_lib@master is cached. Copying from home. 00:00:08.517 [Pipeline] node 00:00:08.528 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:08.530 [Pipeline] { 00:00:08.539 [Pipeline] catchError 00:00:08.541 [Pipeline] { 00:00:08.551 [Pipeline] wrap 00:00:08.558 [Pipeline] { 00:00:08.566 [Pipeline] stage 00:00:08.568 [Pipeline] { (Prologue) 00:00:08.587 [Pipeline] echo 00:00:08.588 Node: VM-host-SM38 00:00:08.594 [Pipeline] cleanWs 00:00:08.606 [WS-CLEANUP] Deleting project workspace... 00:00:08.606 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.613 [WS-CLEANUP] done 00:00:08.819 [Pipeline] setCustomBuildProperty 00:00:08.886 [Pipeline] httpRequest 00:00:09.352 [Pipeline] echo 00:00:09.354 Sorcerer 10.211.164.20 is alive 00:00:09.363 [Pipeline] retry 00:00:09.365 [Pipeline] { 00:00:09.380 [Pipeline] httpRequest 00:00:09.386 HttpMethod: GET 00:00:09.386 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.387 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.404 Response Code: HTTP/1.1 200 OK 00:00:09.405 Success: Status code 200 is in the accepted range: 200,404 00:00:09.405 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:14.341 [Pipeline] } 00:00:14.360 [Pipeline] // retry 00:00:14.369 [Pipeline] sh 00:00:14.654 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:14.673 [Pipeline] httpRequest 00:00:15.056 [Pipeline] echo 00:00:15.059 Sorcerer 10.211.164.20 is alive 00:00:15.070 [Pipeline] retry 00:00:15.073 [Pipeline] { 00:00:15.090 [Pipeline] httpRequest 00:00:15.095 HttpMethod: GET 00:00:15.096 URL: http://10.211.164.20/packages/spdk_557f022f641abf567fb02704f67857eb8f6d9ff3.tar.gz 00:00:15.096 Sending request to url: http://10.211.164.20/packages/spdk_557f022f641abf567fb02704f67857eb8f6d9ff3.tar.gz 00:00:15.120 Response Code: HTTP/1.1 200 OK 00:00:15.121 Success: Status code 200 is in the accepted range: 200,404 00:00:15.121 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_557f022f641abf567fb02704f67857eb8f6d9ff3.tar.gz 00:01:27.418 [Pipeline] } 00:01:27.435 [Pipeline] // retry 00:01:27.442 [Pipeline] sh 00:01:27.727 + tar --no-same-owner -xf spdk_557f022f641abf567fb02704f67857eb8f6d9ff3.tar.gz 00:01:30.340 [Pipeline] sh 00:01:30.629 + git -C spdk log --oneline -n5 00:01:30.629 557f022f6 bdev: Change 1st parameter of bdev_bytes_to_blocks from bdev to desc 00:01:30.629 c0b2ac5c9 bdev: Change void to bdev_io pointer of parameter of _bdev_io_submit() 00:01:30.629 92fb22519 dif: dif_generate/verify_copy() supports NVMe PRACT = 1 and MD size > PI size 00:01:30.629 79daf868a dif: Add SPDK_DIF_FLAGS_NVME_PRACT for dif_generate/verify_copy() 00:01:30.629 431baf1b5 dif: Insert abstraction into dif_generate/verify_copy() for NVMe PRACT 00:01:30.661 [Pipeline] writeFile 00:01:30.691 [Pipeline] sh 00:01:30.973 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:30.986 [Pipeline] sh 00:01:31.269 + cat autorun-spdk.conf 00:01:31.269 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:31.269 SPDK_TEST_NVME=1 00:01:31.269 SPDK_TEST_FTL=1 00:01:31.269 SPDK_TEST_ISAL=1 00:01:31.269 SPDK_RUN_ASAN=1 00:01:31.269 SPDK_RUN_UBSAN=1 00:01:31.269 SPDK_TEST_XNVME=1 00:01:31.269 SPDK_TEST_NVME_FDP=1 00:01:31.269 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:31.278 RUN_NIGHTLY=1 00:01:31.280 [Pipeline] } 00:01:31.294 [Pipeline] // stage 00:01:31.310 [Pipeline] stage 00:01:31.312 [Pipeline] { (Run VM) 00:01:31.325 [Pipeline] sh 00:01:31.610 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:31.610 + echo 'Start stage prepare_nvme.sh' 00:01:31.610 Start stage prepare_nvme.sh 00:01:31.610 + [[ -n 10 ]] 00:01:31.610 + disk_prefix=ex10 00:01:31.610 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:31.610 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:31.610 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:31.610 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:31.610 ++ 
SPDK_TEST_NVME=1 00:01:31.610 ++ SPDK_TEST_FTL=1 00:01:31.610 ++ SPDK_TEST_ISAL=1 00:01:31.610 ++ SPDK_RUN_ASAN=1 00:01:31.610 ++ SPDK_RUN_UBSAN=1 00:01:31.610 ++ SPDK_TEST_XNVME=1 00:01:31.610 ++ SPDK_TEST_NVME_FDP=1 00:01:31.610 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:31.610 ++ RUN_NIGHTLY=1 00:01:31.610 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:31.610 + nvme_files=() 00:01:31.610 + declare -A nvme_files 00:01:31.610 + backend_dir=/var/lib/libvirt/images/backends 00:01:31.610 + nvme_files['nvme.img']=5G 00:01:31.610 + nvme_files['nvme-cmb.img']=5G 00:01:31.610 + nvme_files['nvme-multi0.img']=4G 00:01:31.610 + nvme_files['nvme-multi1.img']=4G 00:01:31.610 + nvme_files['nvme-multi2.img']=4G 00:01:31.610 + nvme_files['nvme-openstack.img']=8G 00:01:31.610 + nvme_files['nvme-zns.img']=5G 00:01:31.610 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:31.610 + (( SPDK_TEST_FTL == 1 )) 00:01:31.610 + nvme_files["nvme-ftl.img"]=6G 00:01:31.610 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:31.610 + nvme_files["nvme-fdp.img"]=1G 00:01:31.610 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:31.610 + for nvme in "${!nvme_files[@]}" 00:01:31.610 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi2.img -s 4G 00:01:31.610 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:31.610 + for nvme in "${!nvme_files[@]}" 00:01:31.610 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-ftl.img -s 6G 00:01:31.871 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:31.871 + for nvme in "${!nvme_files[@]}" 00:01:31.871 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-cmb.img -s 5G 00:01:31.871 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:31.871 + for nvme in "${!nvme_files[@]}" 00:01:31.871 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-openstack.img -s 8G 00:01:31.871 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:31.871 + for nvme in "${!nvme_files[@]}" 00:01:31.871 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-zns.img -s 5G 00:01:31.871 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:31.871 + for nvme in "${!nvme_files[@]}" 00:01:31.871 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi1.img -s 4G 00:01:31.871 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:31.871 + for nvme in "${!nvme_files[@]}" 00:01:31.871 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi0.img -s 4G 00:01:31.871 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:31.871 + for nvme in "${!nvme_files[@]}" 00:01:31.871 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-fdp.img -s 1G 00:01:32.133 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:32.133 + for nvme in "${!nvme_files[@]}" 00:01:32.133 + sudo -E 
spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme.img -s 5G 00:01:32.133 Formatting '/var/lib/libvirt/images/backends/ex10-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:32.133 ++ sudo grep -rl ex10-nvme.img /etc/libvirt/qemu 00:01:32.395 + echo 'End stage prepare_nvme.sh' 00:01:32.395 End stage prepare_nvme.sh 00:01:32.410 [Pipeline] sh 00:01:32.698 + DISTRO=fedora39 00:01:32.698 + CPUS=10 00:01:32.698 + RAM=12288 00:01:32.698 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:32.698 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex10-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex10-nvme.img -b /var/lib/libvirt/images/backends/ex10-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex10-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:32.698 00:01:32.698 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:32.698 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:32.698 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:32.698 HELP=0 00:01:32.698 DRY_RUN=0 00:01:32.698 NVME_FILE=/var/lib/libvirt/images/backends/ex10-nvme-ftl.img,/var/lib/libvirt/images/backends/ex10-nvme.img,/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,/var/lib/libvirt/images/backends/ex10-nvme-fdp.img, 00:01:32.698 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:32.698 NVME_AUTO_CREATE=0 00:01:32.698 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,, 00:01:32.698 NVME_CMB=,,,, 00:01:32.698 NVME_PMR=,,,, 00:01:32.698 NVME_ZNS=,,,, 00:01:32.698 NVME_MS=true,,,, 00:01:32.698 NVME_FDP=,,,on, 00:01:32.698 SPDK_VAGRANT_DISTRO=fedora39 00:01:32.698 SPDK_VAGRANT_VMCPU=10 00:01:32.698 SPDK_VAGRANT_VMRAM=12288 00:01:32.698 SPDK_VAGRANT_PROVIDER=libvirt 00:01:32.698 SPDK_VAGRANT_HTTP_PROXY= 00:01:32.698 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:32.698 SPDK_OPENSTACK_NETWORK=0 00:01:32.698 VAGRANT_PACKAGE_BOX=0 00:01:32.698 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:32.698 FORCE_DISTRO=true 00:01:32.698 VAGRANT_BOX_VERSION= 00:01:32.698 EXTRA_VAGRANTFILES= 00:01:32.698 NIC_MODEL=e1000 00:01:32.698 00:01:32.698 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:32.698 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:35.247 Bringing machine 'default' up with 'libvirt' provider... 00:01:35.509 ==> default: Creating image (snapshot of base box volume). 00:01:35.772 ==> default: Creating domain with the following settings... 
00:01:35.772 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732126253_0c9e37bbd09ddf696d40 00:01:35.772 ==> default: -- Domain type: kvm 00:01:35.772 ==> default: -- Cpus: 10 00:01:35.772 ==> default: -- Feature: acpi 00:01:35.772 ==> default: -- Feature: apic 00:01:35.772 ==> default: -- Feature: pae 00:01:35.772 ==> default: -- Memory: 12288M 00:01:35.772 ==> default: -- Memory Backing: hugepages: 00:01:35.772 ==> default: -- Management MAC: 00:01:35.772 ==> default: -- Loader: 00:01:35.772 ==> default: -- Nvram: 00:01:35.772 ==> default: -- Base box: spdk/fedora39 00:01:35.772 ==> default: -- Storage pool: default 00:01:35.772 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732126253_0c9e37bbd09ddf696d40.img (20G) 00:01:35.772 ==> default: -- Volume Cache: default 00:01:35.772 ==> default: -- Kernel: 00:01:35.772 ==> default: -- Initrd: 00:01:35.772 ==> default: -- Graphics Type: vnc 00:01:35.772 ==> default: -- Graphics Port: -1 00:01:35.772 ==> default: -- Graphics IP: 127.0.0.1 00:01:35.772 ==> default: -- Graphics Password: Not defined 00:01:35.772 ==> default: -- Video Type: cirrus 00:01:35.772 ==> default: -- Video VRAM: 9216 00:01:35.772 ==> default: -- Sound Type: 00:01:35.772 ==> default: -- Keymap: en-us 00:01:35.772 ==> default: -- TPM Path: 00:01:35.772 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:35.772 ==> default: -- Command line args: 00:01:35.772 ==> default: -> value=-device, 00:01:35.772 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:35.772 ==> default: -> value=-drive, 00:01:35.772 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:35.772 ==> default: -> value=-device, 00:01:35.772 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:35.772 ==> default: -> value=-device, 00:01:35.772 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:35.772 ==> default: -> value=-drive, 00:01:35.772 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme.img,if=none,id=nvme-1-drive0, 00:01:35.772 ==> default: -> value=-device, 00:01:35.772 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:35.772 ==> default: -> value=-device, 00:01:35.772 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:35.772 ==> default: -> value=-drive, 00:01:35.772 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:35.772 ==> default: -> value=-device, 00:01:35.772 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:35.772 ==> default: -> value=-drive, 00:01:35.772 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:35.772 ==> default: -> value=-device, 00:01:35.772 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:35.772 ==> default: -> value=-drive, 00:01:35.772 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:35.772 ==> default: -> value=-device, 00:01:35.772 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:35.772 ==> default: -> value=-device, 00:01:35.772 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:35.772 ==> default: -> value=-device, 00:01:35.772 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:35.772 ==> default: -> value=-drive, 00:01:35.772 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:35.772 ==> default: -> value=-device, 00:01:35.772 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:36.034 ==> default: Creating shared folders metadata... 00:01:36.034 ==> default: Starting domain. 00:01:37.951 ==> default: Waiting for domain to get an IP address... 00:01:56.077 ==> default: Waiting for SSH to become available... 00:01:56.077 ==> default: Configuring and enabling network interfaces... 00:02:00.289 default: SSH address: 192.168.121.253:22 00:02:00.289 default: SSH username: vagrant 00:02:00.289 default: SSH auth method: private key 00:02:02.204 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:10.380 ==> default: Mounting SSHFS shared folder... 00:02:11.763 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:11.764 ==> default: Checking Mount.. 00:02:13.145 ==> default: Folder Successfully Mounted! 00:02:13.145 00:02:13.145 SUCCESS! 00:02:13.145 00:02:13.145 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:13.145 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:13.145 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:13.145 00:02:13.156 [Pipeline] } 00:02:13.173 [Pipeline] // stage 00:02:13.184 [Pipeline] dir 00:02:13.184 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:13.186 [Pipeline] { 00:02:13.201 [Pipeline] catchError 00:02:13.203 [Pipeline] { 00:02:13.217 [Pipeline] sh 00:02:13.501 + vagrant ssh-config --host vagrant 00:02:13.501 + sed -ne '/^Host/,$p' 00:02:13.501 + tee ssh_conf 00:02:16.046 Host vagrant 00:02:16.046 HostName 192.168.121.253 00:02:16.046 User vagrant 00:02:16.046 Port 22 00:02:16.046 UserKnownHostsFile /dev/null 00:02:16.046 StrictHostKeyChecking no 00:02:16.046 PasswordAuthentication no 00:02:16.046 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:16.046 IdentitiesOnly yes 00:02:16.046 LogLevel FATAL 00:02:16.046 ForwardAgent yes 00:02:16.046 ForwardX11 yes 00:02:16.046 00:02:16.063 [Pipeline] withEnv 00:02:16.065 [Pipeline] { 00:02:16.082 [Pipeline] sh 00:02:16.365 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:16.365 source /etc/os-release 00:02:16.365 [[ -e /image.version ]] && img=$(< /image.version) 00:02:16.365 # Minimal, systemd-like check. 
00:02:16.365 if [[ -e /.dockerenv ]]; then 00:02:16.365 # Clear garbage from the node'\''s name: 00:02:16.365 # agt-er_autotest_547-896 -> autotest_547-896 00:02:16.365 # $HOSTNAME is the actual container id 00:02:16.365 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:16.365 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:16.365 # We can assume this is a mount from a host where container is running, 00:02:16.365 # so fetch its hostname to easily identify the target swarm worker. 00:02:16.365 container="$(< /etc/hostname) ($agent)" 00:02:16.365 else 00:02:16.365 # Fallback 00:02:16.365 container=$agent 00:02:16.365 fi 00:02:16.365 fi 00:02:16.365 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:16.365 ' 00:02:16.641 [Pipeline] } 00:02:16.657 [Pipeline] // withEnv 00:02:16.667 [Pipeline] setCustomBuildProperty 00:02:16.683 [Pipeline] stage 00:02:16.685 [Pipeline] { (Tests) 00:02:16.702 [Pipeline] sh 00:02:16.988 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:17.264 [Pipeline] sh 00:02:17.547 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:17.824 [Pipeline] timeout 00:02:17.824 Timeout set to expire in 50 min 00:02:17.826 [Pipeline] { 00:02:17.839 [Pipeline] sh 00:02:18.121 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:18.694 HEAD is now at 557f022f6 bdev: Change 1st parameter of bdev_bytes_to_blocks from bdev to desc 00:02:18.709 [Pipeline] sh 00:02:18.994 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:19.271 [Pipeline] sh 00:02:19.557 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:19.836 [Pipeline] sh 00:02:20.120 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:20.381 ++ readlink -f spdk_repo 00:02:20.381 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:20.381 + [[ -n /home/vagrant/spdk_repo ]] 00:02:20.381 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:20.381 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:20.381 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:20.381 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:20.381 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:20.381 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:20.381 + cd /home/vagrant/spdk_repo 00:02:20.381 + source /etc/os-release 00:02:20.381 ++ NAME='Fedora Linux' 00:02:20.381 ++ VERSION='39 (Cloud Edition)' 00:02:20.381 ++ ID=fedora 00:02:20.381 ++ VERSION_ID=39 00:02:20.381 ++ VERSION_CODENAME= 00:02:20.381 ++ PLATFORM_ID=platform:f39 00:02:20.381 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:20.381 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:20.381 ++ LOGO=fedora-logo-icon 00:02:20.381 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:20.381 ++ HOME_URL=https://fedoraproject.org/ 00:02:20.381 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:20.381 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:20.381 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:20.381 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:20.381 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:20.381 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:20.381 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:20.381 ++ SUPPORT_END=2024-11-12 00:02:20.381 ++ VARIANT='Cloud Edition' 00:02:20.381 ++ VARIANT_ID=cloud 00:02:20.381 + uname -a 00:02:20.381 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:20.381 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:20.648 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:20.911 Hugepages 00:02:20.911 node hugesize free / total 00:02:20.911 node0 1048576kB 0 / 0 00:02:20.911 node0 2048kB 0 / 0 00:02:20.911 00:02:20.911 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:20.911 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:20.911 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:20.911 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:20.911 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:20.911 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:20.911 + rm -f /tmp/spdk-ld-path 00:02:20.911 + source autorun-spdk.conf 00:02:20.911 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:20.911 ++ SPDK_TEST_NVME=1 00:02:20.911 ++ SPDK_TEST_FTL=1 00:02:20.911 ++ SPDK_TEST_ISAL=1 00:02:20.911 ++ SPDK_RUN_ASAN=1 00:02:20.911 ++ SPDK_RUN_UBSAN=1 00:02:20.911 ++ SPDK_TEST_XNVME=1 00:02:20.911 ++ SPDK_TEST_NVME_FDP=1 00:02:20.911 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:20.911 ++ RUN_NIGHTLY=1 00:02:20.911 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:20.911 + [[ -n '' ]] 00:02:20.911 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:21.173 + for M in /var/spdk/build-*-manifest.txt 00:02:21.173 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:21.173 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:21.173 + for M in /var/spdk/build-*-manifest.txt 00:02:21.173 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:21.173 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:21.173 + for M in /var/spdk/build-*-manifest.txt 00:02:21.173 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:21.173 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:21.173 ++ uname 00:02:21.173 + [[ Linux == \L\i\n\u\x ]] 00:02:21.173 + sudo dmesg -T 00:02:21.173 + sudo dmesg --clear 00:02:21.173 + dmesg_pid=5027 00:02:21.173 
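The 'source /etc/os-release' step traced above is how these scripts identify the host distro before branching; a minimal standalone sketch of the same probe follows (the Fedora-39 gate is an assumed example for illustration, not a check taken from the CI scripts):

    #!/usr/bin/env bash
    # Sketch: identify the distro the same way the trace above does, by
    # sourcing the standard os-release file. The ">= 39" gate is an
    # assumed example only.
    source /etc/os-release
    echo "Running on ${NAME} ${VERSION_ID} ($(uname -r))"
    if [[ ${ID} == fedora && ${VERSION_ID} -ge 39 ]]; then
        echo "distro accepted by this sketch"
    fi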
+ [[ Fedora Linux == FreeBSD ]] 00:02:21.173 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:21.173 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:21.173 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:21.173 + [[ -x /usr/src/fio-static/fio ]] 00:02:21.173 + sudo dmesg -Tw 00:02:21.173 + export FIO_BIN=/usr/src/fio-static/fio 00:02:21.173 + FIO_BIN=/usr/src/fio-static/fio 00:02:21.173 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:21.173 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:21.173 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:21.173 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:21.173 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:21.173 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:21.173 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:21.173 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:21.173 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:21.173 18:11:39 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:21.173 18:11:39 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:21.173 18:11:39 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:21.173 18:11:39 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:02:21.173 18:11:39 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:02:21.173 18:11:39 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:02:21.173 18:11:39 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:02:21.173 18:11:39 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:21.173 18:11:39 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:02:21.173 18:11:39 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:02:21.173 18:11:39 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:21.173 18:11:39 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=1 00:02:21.173 18:11:39 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:21.173 18:11:39 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:21.173 18:11:39 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:21.173 18:11:39 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:21.173 18:11:39 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:21.173 18:11:39 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:21.173 18:11:39 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:21.173 18:11:39 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:21.173 18:11:39 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:21.173 18:11:39 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:21.173 18:11:39 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:21.173 18:11:39 -- paths/export.sh@5 -- $ export PATH 00:02:21.173 18:11:39 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:21.173 18:11:39 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:21.173 18:11:39 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:21.436 18:11:39 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732126299.XXXXXX 00:02:21.436 18:11:39 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732126299.NOHlJQ 00:02:21.436 18:11:39 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:21.436 18:11:39 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:02:21.436 18:11:39 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:21.436 18:11:39 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:21.436 18:11:39 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:21.436 18:11:39 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:21.436 18:11:39 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:21.436 18:11:39 -- common/autotest_common.sh@10 -- $ set +x 00:02:21.436 18:11:39 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:02:21.436 18:11:39 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:21.436 18:11:39 -- pm/common@17 -- $ local monitor 00:02:21.436 18:11:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:21.436 18:11:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:21.436 18:11:39 -- pm/common@25 -- $ sleep 1 00:02:21.436 18:11:39 -- pm/common@21 -- $ date +%s 00:02:21.436 18:11:39 -- pm/common@21 -- $ date +%s 00:02:21.436 18:11:39 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732126299 00:02:21.436 18:11:39 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732126299 00:02:21.436 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732126299_collect-cpu-load.pm.log 00:02:21.436 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732126299_collect-vmstat.pm.log 00:02:22.381 18:11:40 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:22.381 18:11:40 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:22.381 18:11:40 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:22.381 18:11:40 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:22.381 18:11:40 -- spdk/autobuild.sh@16 -- $ date -u 00:02:22.381 Wed Nov 20 06:11:40 PM UTC 2024 00:02:22.381 18:11:40 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:22.381 v25.01-pre-219-g557f022f6 00:02:22.381 18:11:40 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:22.381 18:11:40 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:22.381 18:11:40 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:22.381 18:11:40 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:22.381 18:11:40 -- common/autotest_common.sh@10 -- $ set +x 00:02:22.381 ************************************ 00:02:22.381 START TEST asan 00:02:22.381 ************************************ 00:02:22.381 using asan 00:02:22.381 18:11:40 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:02:22.381 00:02:22.381 real 0m0.000s 00:02:22.381 user 0m0.000s 00:02:22.381 sys 0m0.000s 00:02:22.381 ************************************ 00:02:22.381 END TEST asan 00:02:22.381 ************************************ 00:02:22.381 18:11:40 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:22.381 18:11:40 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:22.381 18:11:40 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:22.381 18:11:40 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:22.381 18:11:40 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:22.381 18:11:40 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:22.381 18:11:40 -- common/autotest_common.sh@10 -- $ set +x 00:02:22.381 ************************************ 00:02:22.381 START TEST ubsan 00:02:22.381 ************************************ 00:02:22.381 using ubsan 00:02:22.381 18:11:40 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:22.381 00:02:22.381 real 0m0.000s 00:02:22.381 user 0m0.000s 00:02:22.381 sys 0m0.000s 00:02:22.381 ************************************ 00:02:22.381 END TEST ubsan 00:02:22.381 ************************************ 00:02:22.381 18:11:40 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:22.381 18:11:40 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:22.381 18:11:40 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:22.381 18:11:40 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:22.381 18:11:40 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:22.381 18:11:40 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:22.381 18:11:40 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:22.381 18:11:40 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:22.381 18:11:40 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
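A minimal sketch of the monitoring pattern above: autobuild stamps a scratch workspace with date +%s and mktemp -dt, launches the scripts/perf/pm collectors in the background, and registers cleanup to run on EXIT. Reduced to its core, with vmstat standing in for the collectors (an assumed substitute, not the CI helper):

    # Sketch of the timestamped-workspace + EXIT-trap pattern seen above;
    # vmstat is a stand-in for the scripts/perf/pm collectors.
    stamp=$(date +%s)
    workspace=$(mktemp -dt "spdk_${stamp}.XXXXXX")
    vmstat 1 > "${workspace}/vmstat.pm.log" &   # background resource monitor
    monitor_pid=$!
    cleanup() {
        kill "${monitor_pid}" 2>/dev/null       # analogous to stop_monitor_resources
        rm -rf "${workspace}"
    }
    trap cleanup EXIT                           # fires on any exit path, like the trap above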
00:02:22.381 18:11:40 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:22.381 18:11:40 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:02:22.643 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:22.643 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:22.903 Using 'verbs' RDMA provider 00:02:33.882 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:46.140 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:46.140 Creating mk/config.mk...done. 00:02:46.140 Creating mk/cc.flags.mk...done. 00:02:46.140 Type 'make' to build. 00:02:46.140 18:12:03 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:46.140 18:12:03 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:46.140 18:12:03 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:46.140 18:12:03 -- common/autotest_common.sh@10 -- $ set +x 00:02:46.140 ************************************ 00:02:46.140 START TEST make 00:02:46.140 ************************************ 00:02:46.140 18:12:03 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:46.140 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:46.140 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:46.140 meson setup builddir \ 00:02:46.140 -Dwith-libaio=enabled \ 00:02:46.140 -Dwith-liburing=enabled \ 00:02:46.140 -Dwith-libvfn=disabled \ 00:02:46.140 -Dwith-spdk=disabled \ 00:02:46.140 -Dexamples=false \ 00:02:46.140 -Dtests=false \ 00:02:46.140 -Dtools=false && \ 00:02:46.140 meson compile -C builddir && \ 00:02:46.140 cd -) 00:02:46.140 make[1]: Nothing to be done for 'all'. 
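The parenthesized block printed above runs in a subshell, so the cd into xnvme never leaks into the calling make; the trailing cd - is effectively redundant for the same reason. The same pattern in isolation, with a placeholder project path:

    # A minimal sketch of the subshell build pattern above: configure and
    # compile a vendored Meson project without touching the caller's cwd.
    # third-party/somelib is a placeholder path.
    (
        cd third-party/somelib &&
        meson setup builddir -Dexamples=false -Dtests=false &&
        meson compile -C builddir
    )
    # Back here, $PWD is unchanged: the cd only affected the subshell.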
00:02:47.078 The Meson build system 00:02:47.078 Version: 1.5.0 00:02:47.078 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:47.078 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:47.078 Build type: native build 00:02:47.078 Project name: xnvme 00:02:47.078 Project version: 0.7.5 00:02:47.078 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:47.078 C linker for the host machine: cc ld.bfd 2.40-14 00:02:47.078 Host machine cpu family: x86_64 00:02:47.078 Host machine cpu: x86_64 00:02:47.078 Message: host_machine.system: linux 00:02:47.078 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:47.078 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:47.078 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:47.078 Run-time dependency threads found: YES 00:02:47.078 Has header "setupapi.h" : NO 00:02:47.078 Has header "linux/blkzoned.h" : YES 00:02:47.078 Has header "linux/blkzoned.h" : YES (cached) 00:02:47.078 Has header "libaio.h" : YES 00:02:47.078 Library aio found: YES 00:02:47.078 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:47.078 Run-time dependency liburing found: YES 2.2 00:02:47.078 Dependency libvfn skipped: feature with-libvfn disabled 00:02:47.078 Found CMake: /usr/bin/cmake (3.27.7) 00:02:47.078 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:02:47.078 Subproject spdk : skipped: feature with-spdk disabled 00:02:47.078 Run-time dependency appleframeworks found: NO (tried framework) 00:02:47.078 Run-time dependency appleframeworks found: NO (tried framework) 00:02:47.078 Library rt found: YES 00:02:47.078 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:47.079 Configuring xnvme_config.h using configuration 00:02:47.079 Configuring xnvme.spec using configuration 00:02:47.079 Run-time dependency bash-completion found: YES 2.11 00:02:47.079 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:47.079 Program cp found: YES (/usr/bin/cp) 00:02:47.079 Build targets in project: 3 00:02:47.079 00:02:47.079 xnvme 0.7.5 00:02:47.079 00:02:47.079 Subprojects 00:02:47.079 spdk : NO Feature 'with-spdk' disabled 00:02:47.079 00:02:47.079 User defined options 00:02:47.079 examples : false 00:02:47.079 tests : false 00:02:47.079 tools : false 00:02:47.079 with-libaio : enabled 00:02:47.079 with-liburing: enabled 00:02:47.079 with-libvfn : disabled 00:02:47.079 with-spdk : disabled 00:02:47.079 00:02:47.079 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:47.337 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:47.337 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:02:47.337 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:02:47.337 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:02:47.337 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:02:47.337 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:02:47.337 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:02:47.337 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:02:47.337 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:02:47.337 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:02:47.337 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 
00:02:47.596 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:02:47.596 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:02:47.596 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:02:47.596 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:02:47.596 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:02:47.596 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:02:47.596 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:02:47.596 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:02:47.596 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:02:47.596 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:02:47.596 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:02:47.596 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:02:47.596 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:02:47.596 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:02:47.596 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:02:47.596 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:02:47.596 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:02:47.596 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:02:47.596 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:02:47.596 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:02:47.596 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:02:47.597 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:02:47.597 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:02:47.597 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:02:47.597 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:02:47.597 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:02:47.597 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:02:47.597 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:02:47.597 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:02:47.597 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:02:47.597 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:02:47.597 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:02:47.856 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:02:47.856 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:02:47.856 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:02:47.856 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:02:47.856 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:02:47.856 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:02:47.856 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:02:47.856 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:02:47.856 [51/76] 
Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:02:47.856 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:02:47.856 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:02:47.856 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:02:47.856 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:02:47.856 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:02:47.856 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:02:47.856 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:02:47.856 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:02:47.856 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:02:47.856 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:02:47.856 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:02:47.856 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:02:47.856 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:02:47.856 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:02:48.114 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:02:48.114 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:02:48.114 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:02:48.114 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:02:48.114 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:02:48.114 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:02:48.114 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:02:48.114 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:02:48.373 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:02:48.373 [75/76] Linking static target lib/libxnvme.a 00:02:48.373 [76/76] Linking target lib/libxnvme.so.0.7.5 00:02:48.373 INFO: autodetecting backend as ninja 00:02:48.373 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:48.373 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:54.928 The Meson build system 00:02:54.928 Version: 1.5.0 00:02:54.928 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:54.928 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:54.928 Build type: native build 00:02:54.928 Program cat found: YES (/usr/bin/cat) 00:02:54.928 Project name: DPDK 00:02:54.928 Project version: 24.03.0 00:02:54.928 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:54.928 C linker for the host machine: cc ld.bfd 2.40-14 00:02:54.928 Host machine cpu family: x86_64 00:02:54.928 Host machine cpu: x86_64 00:02:54.928 Message: ## Building in Developer Mode ## 00:02:54.928 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:54.928 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:54.928 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:54.928 Program python3 found: YES (/usr/bin/python3) 00:02:54.928 Program cat found: YES (/usr/bin/cat) 00:02:54.928 Compiler for C supports arguments -march=native: YES 00:02:54.928 Checking for size of "void *" : 8 00:02:54.928 Checking for size of "void *" : 8 (cached) 00:02:54.928 Compiler for C supports link arguments 
-Wl,--undefined-version: YES 00:02:54.928 Library m found: YES 00:02:54.928 Library numa found: YES 00:02:54.928 Has header "numaif.h" : YES 00:02:54.928 Library fdt found: NO 00:02:54.928 Library execinfo found: NO 00:02:54.928 Has header "execinfo.h" : YES 00:02:54.928 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:54.928 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:54.928 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:54.928 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:54.928 Run-time dependency openssl found: YES 3.1.1 00:02:54.928 Run-time dependency libpcap found: YES 1.10.4 00:02:54.928 Has header "pcap.h" with dependency libpcap: YES 00:02:54.928 Compiler for C supports arguments -Wcast-qual: YES 00:02:54.928 Compiler for C supports arguments -Wdeprecated: YES 00:02:54.928 Compiler for C supports arguments -Wformat: YES 00:02:54.928 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:54.928 Compiler for C supports arguments -Wformat-security: NO 00:02:54.928 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:54.929 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:54.929 Compiler for C supports arguments -Wnested-externs: YES 00:02:54.929 Compiler for C supports arguments -Wold-style-definition: YES 00:02:54.929 Compiler for C supports arguments -Wpointer-arith: YES 00:02:54.929 Compiler for C supports arguments -Wsign-compare: YES 00:02:54.929 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:54.929 Compiler for C supports arguments -Wundef: YES 00:02:54.929 Compiler for C supports arguments -Wwrite-strings: YES 00:02:54.929 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:54.929 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:54.929 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:54.929 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:54.929 Program objdump found: YES (/usr/bin/objdump) 00:02:54.929 Compiler for C supports arguments -mavx512f: YES 00:02:54.929 Checking if "AVX512 checking" compiles: YES 00:02:54.929 Fetching value of define "__SSE4_2__" : 1 00:02:54.929 Fetching value of define "__AES__" : 1 00:02:54.929 Fetching value of define "__AVX__" : 1 00:02:54.929 Fetching value of define "__AVX2__" : 1 00:02:54.929 Fetching value of define "__AVX512BW__" : 1 00:02:54.929 Fetching value of define "__AVX512CD__" : 1 00:02:54.929 Fetching value of define "__AVX512DQ__" : 1 00:02:54.929 Fetching value of define "__AVX512F__" : 1 00:02:54.929 Fetching value of define "__AVX512VL__" : 1 00:02:54.929 Fetching value of define "__PCLMUL__" : 1 00:02:54.929 Fetching value of define "__RDRND__" : 1 00:02:54.929 Fetching value of define "__RDSEED__" : 1 00:02:54.929 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:54.929 Fetching value of define "__znver1__" : (undefined) 00:02:54.929 Fetching value of define "__znver2__" : (undefined) 00:02:54.929 Fetching value of define "__znver3__" : (undefined) 00:02:54.929 Fetching value of define "__znver4__" : (undefined) 00:02:54.929 Library asan found: YES 00:02:54.929 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:54.929 Message: lib/log: Defining dependency "log" 00:02:54.929 Message: lib/kvargs: Defining dependency "kvargs" 00:02:54.929 Message: lib/telemetry: Defining dependency "telemetry" 00:02:54.929 Library rt found: YES 00:02:54.929 Checking for function "getentropy" : NO 00:02:54.929 Message: 
lib/eal: Defining dependency "eal" 00:02:54.929 Message: lib/ring: Defining dependency "ring" 00:02:54.929 Message: lib/rcu: Defining dependency "rcu" 00:02:54.929 Message: lib/mempool: Defining dependency "mempool" 00:02:54.929 Message: lib/mbuf: Defining dependency "mbuf" 00:02:54.929 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:54.929 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:54.929 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:54.929 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:54.929 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:54.929 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:54.929 Compiler for C supports arguments -mpclmul: YES 00:02:54.929 Compiler for C supports arguments -maes: YES 00:02:54.929 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:54.929 Compiler for C supports arguments -mavx512bw: YES 00:02:54.929 Compiler for C supports arguments -mavx512dq: YES 00:02:54.929 Compiler for C supports arguments -mavx512vl: YES 00:02:54.929 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:54.929 Compiler for C supports arguments -mavx2: YES 00:02:54.929 Compiler for C supports arguments -mavx: YES 00:02:54.929 Message: lib/net: Defining dependency "net" 00:02:54.929 Message: lib/meter: Defining dependency "meter" 00:02:54.929 Message: lib/ethdev: Defining dependency "ethdev" 00:02:54.929 Message: lib/pci: Defining dependency "pci" 00:02:54.929 Message: lib/cmdline: Defining dependency "cmdline" 00:02:54.929 Message: lib/hash: Defining dependency "hash" 00:02:54.929 Message: lib/timer: Defining dependency "timer" 00:02:54.929 Message: lib/compressdev: Defining dependency "compressdev" 00:02:54.929 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:54.929 Message: lib/dmadev: Defining dependency "dmadev" 00:02:54.929 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:54.929 Message: lib/power: Defining dependency "power" 00:02:54.929 Message: lib/reorder: Defining dependency "reorder" 00:02:54.929 Message: lib/security: Defining dependency "security" 00:02:54.929 Has header "linux/userfaultfd.h" : YES 00:02:54.929 Has header "linux/vduse.h" : YES 00:02:54.929 Message: lib/vhost: Defining dependency "vhost" 00:02:54.929 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:54.929 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:54.929 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:54.929 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:54.929 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:54.929 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:54.929 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:54.929 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:54.929 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:54.929 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:54.929 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:54.929 Configuring doxy-api-html.conf using configuration 00:02:54.929 Configuring doxy-api-man.conf using configuration 00:02:54.929 Program mandb found: YES (/usr/bin/mandb) 00:02:54.929 Program sphinx-build found: NO 00:02:54.929 Configuring rte_build_config.h using configuration 00:02:54.929 Message: 00:02:54.929 ================= 00:02:54.929 Applications Enabled 00:02:54.929 
================= 00:02:54.929 00:02:54.929 apps: 00:02:54.929 00:02:54.929 00:02:54.929 Message: 00:02:54.929 ================= 00:02:54.929 Libraries Enabled 00:02:54.929 ================= 00:02:54.929 00:02:54.929 libs: 00:02:54.929 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:54.929 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:54.929 cryptodev, dmadev, power, reorder, security, vhost, 00:02:54.929 00:02:54.929 Message: 00:02:54.929 =============== 00:02:54.929 Drivers Enabled 00:02:54.929 =============== 00:02:54.929 00:02:54.929 common: 00:02:54.929 00:02:54.929 bus: 00:02:54.929 pci, vdev, 00:02:54.929 mempool: 00:02:54.929 ring, 00:02:54.929 dma: 00:02:54.929 00:02:54.929 net: 00:02:54.929 00:02:54.929 crypto: 00:02:54.929 00:02:54.929 compress: 00:02:54.929 00:02:54.929 vdpa: 00:02:54.929 00:02:54.929 00:02:54.929 Message: 00:02:54.929 ================= 00:02:54.929 Content Skipped 00:02:54.929 ================= 00:02:54.929 00:02:54.929 apps: 00:02:54.929 dumpcap: explicitly disabled via build config 00:02:54.929 graph: explicitly disabled via build config 00:02:54.929 pdump: explicitly disabled via build config 00:02:54.929 proc-info: explicitly disabled via build config 00:02:54.929 test-acl: explicitly disabled via build config 00:02:54.929 test-bbdev: explicitly disabled via build config 00:02:54.929 test-cmdline: explicitly disabled via build config 00:02:54.929 test-compress-perf: explicitly disabled via build config 00:02:54.929 test-crypto-perf: explicitly disabled via build config 00:02:54.929 test-dma-perf: explicitly disabled via build config 00:02:54.929 test-eventdev: explicitly disabled via build config 00:02:54.929 test-fib: explicitly disabled via build config 00:02:54.929 test-flow-perf: explicitly disabled via build config 00:02:54.929 test-gpudev: explicitly disabled via build config 00:02:54.929 test-mldev: explicitly disabled via build config 00:02:54.929 test-pipeline: explicitly disabled via build config 00:02:54.929 test-pmd: explicitly disabled via build config 00:02:54.929 test-regex: explicitly disabled via build config 00:02:54.929 test-sad: explicitly disabled via build config 00:02:54.929 test-security-perf: explicitly disabled via build config 00:02:54.929 00:02:54.929 libs: 00:02:54.929 argparse: explicitly disabled via build config 00:02:54.929 metrics: explicitly disabled via build config 00:02:54.929 acl: explicitly disabled via build config 00:02:54.929 bbdev: explicitly disabled via build config 00:02:54.929 bitratestats: explicitly disabled via build config 00:02:54.929 bpf: explicitly disabled via build config 00:02:54.929 cfgfile: explicitly disabled via build config 00:02:54.929 distributor: explicitly disabled via build config 00:02:54.929 efd: explicitly disabled via build config 00:02:54.929 eventdev: explicitly disabled via build config 00:02:54.929 dispatcher: explicitly disabled via build config 00:02:54.929 gpudev: explicitly disabled via build config 00:02:54.929 gro: explicitly disabled via build config 00:02:54.929 gso: explicitly disabled via build config 00:02:54.929 ip_frag: explicitly disabled via build config 00:02:54.929 jobstats: explicitly disabled via build config 00:02:54.929 latencystats: explicitly disabled via build config 00:02:54.929 lpm: explicitly disabled via build config 00:02:54.929 member: explicitly disabled via build config 00:02:54.929 pcapng: explicitly disabled via build config 00:02:54.929 rawdev: explicitly disabled via build config 00:02:54.929 regexdev: explicitly 
disabled via build config 00:02:54.929 mldev: explicitly disabled via build config 00:02:54.929 rib: explicitly disabled via build config 00:02:54.929 sched: explicitly disabled via build config 00:02:54.929 stack: explicitly disabled via build config 00:02:54.929 ipsec: explicitly disabled via build config 00:02:54.929 pdcp: explicitly disabled via build config 00:02:54.929 fib: explicitly disabled via build config 00:02:54.929 port: explicitly disabled via build config 00:02:54.929 pdump: explicitly disabled via build config 00:02:54.929 table: explicitly disabled via build config 00:02:54.929 pipeline: explicitly disabled via build config 00:02:54.929 graph: explicitly disabled via build config 00:02:54.929 node: explicitly disabled via build config 00:02:54.929 00:02:54.929 drivers: 00:02:54.929 common/cpt: not in enabled drivers build config 00:02:54.929 common/dpaax: not in enabled drivers build config 00:02:54.929 common/iavf: not in enabled drivers build config 00:02:54.929 common/idpf: not in enabled drivers build config 00:02:54.929 common/ionic: not in enabled drivers build config 00:02:54.930 common/mvep: not in enabled drivers build config 00:02:54.930 common/octeontx: not in enabled drivers build config 00:02:54.930 bus/auxiliary: not in enabled drivers build config 00:02:54.930 bus/cdx: not in enabled drivers build config 00:02:54.930 bus/dpaa: not in enabled drivers build config 00:02:54.930 bus/fslmc: not in enabled drivers build config 00:02:54.930 bus/ifpga: not in enabled drivers build config 00:02:54.930 bus/platform: not in enabled drivers build config 00:02:54.930 bus/uacce: not in enabled drivers build config 00:02:54.930 bus/vmbus: not in enabled drivers build config 00:02:54.930 common/cnxk: not in enabled drivers build config 00:02:54.930 common/mlx5: not in enabled drivers build config 00:02:54.930 common/nfp: not in enabled drivers build config 00:02:54.930 common/nitrox: not in enabled drivers build config 00:02:54.930 common/qat: not in enabled drivers build config 00:02:54.930 common/sfc_efx: not in enabled drivers build config 00:02:54.930 mempool/bucket: not in enabled drivers build config 00:02:54.930 mempool/cnxk: not in enabled drivers build config 00:02:54.930 mempool/dpaa: not in enabled drivers build config 00:02:54.930 mempool/dpaa2: not in enabled drivers build config 00:02:54.930 mempool/octeontx: not in enabled drivers build config 00:02:54.930 mempool/stack: not in enabled drivers build config 00:02:54.930 dma/cnxk: not in enabled drivers build config 00:02:54.930 dma/dpaa: not in enabled drivers build config 00:02:54.930 dma/dpaa2: not in enabled drivers build config 00:02:54.930 dma/hisilicon: not in enabled drivers build config 00:02:54.930 dma/idxd: not in enabled drivers build config 00:02:54.930 dma/ioat: not in enabled drivers build config 00:02:54.930 dma/skeleton: not in enabled drivers build config 00:02:54.930 net/af_packet: not in enabled drivers build config 00:02:54.930 net/af_xdp: not in enabled drivers build config 00:02:54.930 net/ark: not in enabled drivers build config 00:02:54.930 net/atlantic: not in enabled drivers build config 00:02:54.930 net/avp: not in enabled drivers build config 00:02:54.930 net/axgbe: not in enabled drivers build config 00:02:54.930 net/bnx2x: not in enabled drivers build config 00:02:54.930 net/bnxt: not in enabled drivers build config 00:02:54.930 net/bonding: not in enabled drivers build config 00:02:54.930 net/cnxk: not in enabled drivers build config 00:02:54.930 net/cpfl: not in enabled drivers 
build config 00:02:54.930 net/cxgbe: not in enabled drivers build config 00:02:54.930 net/dpaa: not in enabled drivers build config 00:02:54.930 net/dpaa2: not in enabled drivers build config 00:02:54.930 net/e1000: not in enabled drivers build config 00:02:54.930 net/ena: not in enabled drivers build config 00:02:54.930 net/enetc: not in enabled drivers build config 00:02:54.930 net/enetfec: not in enabled drivers build config 00:02:54.930 net/enic: not in enabled drivers build config 00:02:54.930 net/failsafe: not in enabled drivers build config 00:02:54.930 net/fm10k: not in enabled drivers build config 00:02:54.930 net/gve: not in enabled drivers build config 00:02:54.930 net/hinic: not in enabled drivers build config 00:02:54.930 net/hns3: not in enabled drivers build config 00:02:54.930 net/i40e: not in enabled drivers build config 00:02:54.930 net/iavf: not in enabled drivers build config 00:02:54.930 net/ice: not in enabled drivers build config 00:02:54.930 net/idpf: not in enabled drivers build config 00:02:54.930 net/igc: not in enabled drivers build config 00:02:54.930 net/ionic: not in enabled drivers build config 00:02:54.930 net/ipn3ke: not in enabled drivers build config 00:02:54.930 net/ixgbe: not in enabled drivers build config 00:02:54.930 net/mana: not in enabled drivers build config 00:02:54.930 net/memif: not in enabled drivers build config 00:02:54.930 net/mlx4: not in enabled drivers build config 00:02:54.930 net/mlx5: not in enabled drivers build config 00:02:54.930 net/mvneta: not in enabled drivers build config 00:02:54.930 net/mvpp2: not in enabled drivers build config 00:02:54.930 net/netvsc: not in enabled drivers build config 00:02:54.930 net/nfb: not in enabled drivers build config 00:02:54.930 net/nfp: not in enabled drivers build config 00:02:54.930 net/ngbe: not in enabled drivers build config 00:02:54.930 net/null: not in enabled drivers build config 00:02:54.930 net/octeontx: not in enabled drivers build config 00:02:54.930 net/octeon_ep: not in enabled drivers build config 00:02:54.930 net/pcap: not in enabled drivers build config 00:02:54.930 net/pfe: not in enabled drivers build config 00:02:54.930 net/qede: not in enabled drivers build config 00:02:54.930 net/ring: not in enabled drivers build config 00:02:54.930 net/sfc: not in enabled drivers build config 00:02:54.930 net/softnic: not in enabled drivers build config 00:02:54.930 net/tap: not in enabled drivers build config 00:02:54.930 net/thunderx: not in enabled drivers build config 00:02:54.930 net/txgbe: not in enabled drivers build config 00:02:54.930 net/vdev_netvsc: not in enabled drivers build config 00:02:54.930 net/vhost: not in enabled drivers build config 00:02:54.930 net/virtio: not in enabled drivers build config 00:02:54.930 net/vmxnet3: not in enabled drivers build config 00:02:54.930 raw/*: missing internal dependency, "rawdev" 00:02:54.930 crypto/armv8: not in enabled drivers build config 00:02:54.930 crypto/bcmfs: not in enabled drivers build config 00:02:54.930 crypto/caam_jr: not in enabled drivers build config 00:02:54.930 crypto/ccp: not in enabled drivers build config 00:02:54.930 crypto/cnxk: not in enabled drivers build config 00:02:54.930 crypto/dpaa_sec: not in enabled drivers build config 00:02:54.930 crypto/dpaa2_sec: not in enabled drivers build config 00:02:54.930 crypto/ipsec_mb: not in enabled drivers build config 00:02:54.930 crypto/mlx5: not in enabled drivers build config 00:02:54.930 crypto/mvsam: not in enabled drivers build config 00:02:54.930 crypto/nitrox: 
not in enabled drivers build config 00:02:54.930 crypto/null: not in enabled drivers build config 00:02:54.930 crypto/octeontx: not in enabled drivers build config 00:02:54.930 crypto/openssl: not in enabled drivers build config 00:02:54.930 crypto/scheduler: not in enabled drivers build config 00:02:54.930 crypto/uadk: not in enabled drivers build config 00:02:54.930 crypto/virtio: not in enabled drivers build config 00:02:54.930 compress/isal: not in enabled drivers build config 00:02:54.930 compress/mlx5: not in enabled drivers build config 00:02:54.930 compress/nitrox: not in enabled drivers build config 00:02:54.930 compress/octeontx: not in enabled drivers build config 00:02:54.930 compress/zlib: not in enabled drivers build config 00:02:54.930 regex/*: missing internal dependency, "regexdev" 00:02:54.930 ml/*: missing internal dependency, "mldev" 00:02:54.930 vdpa/ifc: not in enabled drivers build config 00:02:54.930 vdpa/mlx5: not in enabled drivers build config 00:02:54.930 vdpa/nfp: not in enabled drivers build config 00:02:54.930 vdpa/sfc: not in enabled drivers build config 00:02:54.930 event/*: missing internal dependency, "eventdev" 00:02:54.930 baseband/*: missing internal dependency, "bbdev" 00:02:54.930 gpu/*: missing internal dependency, "gpudev" 00:02:54.930 00:02:54.930 00:02:54.930 Build targets in project: 84 00:02:54.930 00:02:54.930 DPDK 24.03.0 00:02:54.930 00:02:54.930 User defined options 00:02:54.930 buildtype : debug 00:02:54.930 default_library : shared 00:02:54.930 libdir : lib 00:02:54.930 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:54.930 b_sanitize : address 00:02:54.930 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:54.930 c_link_args : 00:02:54.930 cpu_instruction_set: native 00:02:54.930 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:54.930 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:54.930 enable_docs : false 00:02:54.930 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:54.930 enable_kmods : false 00:02:54.930 max_lcores : 128 00:02:54.930 tests : false 00:02:54.930 00:02:54.930 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:54.930 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:54.930 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:54.930 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:54.930 [3/267] Linking static target lib/librte_kvargs.a 00:02:54.930 [4/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:54.930 [5/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:54.930 [6/267] Linking static target lib/librte_log.a 00:02:54.930 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:54.930 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:55.189 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:55.189 [10/267] Compiling C 
object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:55.189 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:55.189 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:55.189 [13/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.189 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:55.189 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:55.189 [16/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:55.189 [17/267] Linking static target lib/librte_telemetry.a 00:02:55.189 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:55.447 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:55.447 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:55.447 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:55.447 [22/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.447 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:55.447 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:55.706 [25/267] Linking target lib/librte_log.so.24.1 00:02:55.706 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:55.706 [27/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:55.706 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:55.706 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:55.706 [30/267] Linking target lib/librte_kvargs.so.24.1 00:02:55.706 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:55.706 [32/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:55.965 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:55.965 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:55.965 [35/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.965 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:55.965 [37/267] Linking target lib/librte_telemetry.so.24.1 00:02:55.965 [38/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:55.965 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:55.965 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:55.965 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:55.965 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:55.965 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:55.965 [44/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:56.224 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:56.224 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:56.224 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:56.224 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 
00:02:56.224 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:56.483 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:56.483 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:56.483 [52/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:56.483 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:56.483 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:56.483 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:56.483 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:56.483 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:56.742 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:56.742 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:56.742 [60/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:56.742 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:56.742 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:56.742 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:56.742 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:56.742 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:57.000 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:57.000 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:57.000 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:57.259 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:57.259 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:57.259 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:57.259 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:57.259 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:57.259 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:57.259 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:57.259 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:57.259 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:57.259 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:57.259 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:57.517 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:57.517 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:57.517 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:57.517 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:57.517 [84/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:57.776 [85/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:57.776 [86/267] Linking static target lib/librte_ring.a 00:02:57.776 [87/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:57.776 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:57.776 [89/267] Linking static target lib/librte_eal.a 00:02:57.776 [90/267] 
Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:57.776 [91/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:58.033 [92/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:58.033 [93/267] Linking static target lib/librte_mempool.a 00:02:58.033 [94/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:58.033 [95/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:58.033 [96/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:58.033 [97/267] Linking static target lib/librte_rcu.a 00:02:58.033 [98/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.290 [99/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:58.291 [100/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:58.291 [101/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:58.291 [102/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:58.549 [103/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:58.549 [104/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.549 [105/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:58.549 [106/267] Linking static target lib/librte_net.a 00:02:58.549 [107/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:58.549 [108/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:58.549 [109/267] Linking static target lib/librte_mbuf.a 00:02:58.549 [110/267] Linking static target lib/librte_meter.a 00:02:58.807 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:58.807 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:58.807 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:58.807 [114/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.807 [115/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.065 [116/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.065 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:59.065 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:59.324 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:59.324 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:59.324 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:59.324 [122/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.582 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:59.582 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:59.582 [125/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:59.582 [126/267] Linking static target lib/librte_pci.a 00:02:59.582 [127/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:59.582 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:59.582 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:59.841 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:59.841 [131/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:59.841 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:59.841 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:59.841 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:59.841 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:59.841 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:59.841 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:59.841 [138/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.841 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:59.841 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:59.841 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:59.841 [142/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:59.841 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:59.841 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:00.100 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:00.100 [146/267] Linking static target lib/librte_cmdline.a 00:03:00.100 [147/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:00.100 [148/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:00.100 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:00.100 [150/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:00.100 [151/267] Linking static target lib/librte_timer.a 00:03:00.359 [152/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:00.359 [153/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:00.359 [154/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:00.617 [155/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:00.617 [156/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:00.617 [157/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:00.617 [158/267] Linking static target lib/librte_compressdev.a 00:03:00.617 [159/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:00.617 [160/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.875 [161/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:00.875 [162/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:00.875 [163/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:00.875 [164/267] Linking static target lib/librte_dmadev.a 00:03:01.133 [165/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:01.133 [166/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:01.133 [167/267] Linking static target lib/librte_hash.a 00:03:01.133 [168/267] Linking static target lib/librte_ethdev.a 00:03:01.133 [169/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:01.133 [170/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:01.133 [171/267] Compiling C 
object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:01.133 [172/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.133 [173/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:01.392 [174/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.392 [175/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:01.392 [176/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:01.392 [177/267] Linking static target lib/librte_cryptodev.a 00:03:01.392 [178/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:01.392 [179/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:01.392 [180/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:01.392 [181/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.650 [182/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:01.650 [183/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:01.650 [184/267] Linking static target lib/librte_power.a 00:03:01.909 [185/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:01.909 [186/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:01.909 [187/267] Linking static target lib/librte_reorder.a 00:03:01.909 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:01.909 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:01.909 [190/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.909 [191/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:01.909 [192/267] Linking static target lib/librte_security.a 00:03:02.168 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.428 [194/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.428 [195/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:02.686 [196/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.686 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:02.686 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:02.686 [199/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:02.944 [200/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:02.944 [201/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:02.945 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:02.945 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:02.945 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:02.945 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:03.203 [206/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:03.203 [207/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:03.203 [208/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:03.203 [209/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:03.203 [210/267] Generating lib/cryptodev.sym_chk 
with a custom command (wrapped by meson to capture output) 00:03:03.462 [211/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:03.462 [212/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:03.462 [213/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:03.462 [214/267] Linking static target drivers/librte_bus_vdev.a 00:03:03.462 [215/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:03.462 [216/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:03.462 [217/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:03.462 [218/267] Linking static target drivers/librte_bus_pci.a 00:03:03.462 [219/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:03.462 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:03.721 [221/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.721 [222/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:03.721 [223/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:03.721 [224/267] Linking static target drivers/librte_mempool_ring.a 00:03:03.721 [225/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:03.721 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.979 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:04.915 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.915 [229/267] Linking target lib/librte_eal.so.24.1 00:03:05.173 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:05.173 [231/267] Linking target lib/librte_meter.so.24.1 00:03:05.173 [232/267] Linking target lib/librte_ring.so.24.1 00:03:05.173 [233/267] Linking target lib/librte_pci.so.24.1 00:03:05.173 [234/267] Linking target lib/librte_timer.so.24.1 00:03:05.173 [235/267] Linking target lib/librte_dmadev.so.24.1 00:03:05.173 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:03:05.173 [237/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:05.173 [238/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:05.173 [239/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:05.173 [240/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:05.173 [241/267] Linking target drivers/librte_bus_pci.so.24.1 00:03:05.431 [242/267] Linking target lib/librte_rcu.so.24.1 00:03:05.431 [243/267] Linking target lib/librte_mempool.so.24.1 00:03:05.431 [244/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:05.431 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:05.431 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:05.431 [247/267] Linking target lib/librte_mbuf.so.24.1 00:03:05.431 [248/267] Linking target drivers/librte_mempool_ring.so.24.1 00:03:05.431 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:05.689 [250/267] 
Linking target lib/librte_reorder.so.24.1 00:03:05.689 [251/267] Linking target lib/librte_compressdev.so.24.1 00:03:05.689 [252/267] Linking target lib/librte_cryptodev.so.24.1 00:03:05.689 [253/267] Linking target lib/librte_net.so.24.1 00:03:05.689 [254/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:05.689 [255/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:05.689 [256/267] Linking target lib/librte_cmdline.so.24.1 00:03:05.689 [257/267] Linking target lib/librte_hash.so.24.1 00:03:05.689 [258/267] Linking target lib/librte_security.so.24.1 00:03:05.953 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:06.212 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.212 [261/267] Linking target lib/librte_ethdev.so.24.1 00:03:06.471 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:06.471 [263/267] Linking target lib/librte_power.so.24.1 00:03:06.471 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:06.729 [265/267] Linking static target lib/librte_vhost.a 00:03:07.664 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.664 [267/267] Linking target lib/librte_vhost.so.24.1 00:03:07.664 INFO: autodetecting backend as ninja 00:03:07.664 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:22.542 CC lib/ut/ut.o 00:03:22.543 CC lib/log/log_flags.o 00:03:22.543 CC lib/log/log.o 00:03:22.543 CC lib/log/log_deprecated.o 00:03:22.543 CC lib/ut_mock/mock.o 00:03:22.543 LIB libspdk_ut.a 00:03:22.543 SO libspdk_ut.so.2.0 00:03:22.543 LIB libspdk_ut_mock.a 00:03:22.543 LIB libspdk_log.a 00:03:22.543 SO libspdk_ut_mock.so.6.0 00:03:22.543 SO libspdk_log.so.7.1 00:03:22.543 SYMLINK libspdk_ut.so 00:03:22.543 SYMLINK libspdk_ut_mock.so 00:03:22.543 SYMLINK libspdk_log.so 00:03:22.543 CXX lib/trace_parser/trace.o 00:03:22.543 CC lib/ioat/ioat.o 00:03:22.543 CC lib/dma/dma.o 00:03:22.543 CC lib/util/base64.o 00:03:22.543 CC lib/util/bit_array.o 00:03:22.543 CC lib/util/cpuset.o 00:03:22.543 CC lib/util/crc16.o 00:03:22.543 CC lib/util/crc32.o 00:03:22.543 CC lib/util/crc32c.o 00:03:22.543 CC lib/vfio_user/host/vfio_user_pci.o 00:03:22.543 CC lib/util/crc32_ieee.o 00:03:22.543 CC lib/util/crc64.o 00:03:22.543 CC lib/vfio_user/host/vfio_user.o 00:03:22.543 CC lib/util/dif.o 00:03:22.543 LIB libspdk_dma.a 00:03:22.543 CC lib/util/fd.o 00:03:22.543 SO libspdk_dma.so.5.0 00:03:22.543 CC lib/util/fd_group.o 00:03:22.543 CC lib/util/file.o 00:03:22.543 CC lib/util/hexlify.o 00:03:22.543 SYMLINK libspdk_dma.so 00:03:22.543 CC lib/util/iov.o 00:03:22.543 CC lib/util/math.o 00:03:22.543 LIB libspdk_ioat.a 00:03:22.543 CC lib/util/net.o 00:03:22.543 SO libspdk_ioat.so.7.0 00:03:22.543 LIB libspdk_vfio_user.a 00:03:22.543 SYMLINK libspdk_ioat.so 00:03:22.543 SO libspdk_vfio_user.so.5.0 00:03:22.543 CC lib/util/pipe.o 00:03:22.543 CC lib/util/strerror_tls.o 00:03:22.543 CC lib/util/string.o 00:03:22.543 CC lib/util/uuid.o 00:03:22.543 CC lib/util/xor.o 00:03:22.543 SYMLINK libspdk_vfio_user.so 00:03:22.543 CC lib/util/zipf.o 00:03:22.543 CC lib/util/md5.o 00:03:22.543 LIB libspdk_util.a 00:03:22.543 SO libspdk_util.so.10.1 00:03:22.543 LIB libspdk_trace_parser.a 00:03:22.543 SO libspdk_trace_parser.so.6.0 00:03:22.543 SYMLINK libspdk_util.so 
00:03:22.543 SYMLINK libspdk_trace_parser.so 00:03:22.543 CC lib/json/json_parse.o 00:03:22.543 CC lib/json/json_util.o 00:03:22.543 CC lib/vmd/vmd.o 00:03:22.543 CC lib/json/json_write.o 00:03:22.543 CC lib/conf/conf.o 00:03:22.543 CC lib/vmd/led.o 00:03:22.543 CC lib/rdma_utils/rdma_utils.o 00:03:22.543 CC lib/env_dpdk/memory.o 00:03:22.543 CC lib/env_dpdk/env.o 00:03:22.543 CC lib/idxd/idxd.o 00:03:22.543 CC lib/idxd/idxd_user.o 00:03:22.543 CC lib/idxd/idxd_kernel.o 00:03:22.543 LIB libspdk_rdma_utils.a 00:03:22.543 LIB libspdk_conf.a 00:03:22.543 CC lib/env_dpdk/pci.o 00:03:22.543 SO libspdk_rdma_utils.so.1.0 00:03:22.543 SO libspdk_conf.so.6.0 00:03:22.543 SYMLINK libspdk_rdma_utils.so 00:03:22.543 LIB libspdk_json.a 00:03:22.543 SYMLINK libspdk_conf.so 00:03:22.543 CC lib/env_dpdk/init.o 00:03:22.543 CC lib/env_dpdk/threads.o 00:03:22.543 SO libspdk_json.so.6.0 00:03:22.543 SYMLINK libspdk_json.so 00:03:22.543 CC lib/env_dpdk/pci_ioat.o 00:03:22.543 CC lib/env_dpdk/pci_virtio.o 00:03:22.543 CC lib/rdma_provider/common.o 00:03:22.801 CC lib/jsonrpc/jsonrpc_server.o 00:03:22.801 CC lib/env_dpdk/pci_vmd.o 00:03:22.801 CC lib/env_dpdk/pci_idxd.o 00:03:22.801 LIB libspdk_vmd.a 00:03:22.801 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:22.801 SO libspdk_vmd.so.6.0 00:03:22.801 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:22.801 CC lib/env_dpdk/pci_event.o 00:03:22.801 CC lib/jsonrpc/jsonrpc_client.o 00:03:22.801 SYMLINK libspdk_vmd.so 00:03:22.801 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:22.801 CC lib/env_dpdk/sigbus_handler.o 00:03:22.801 LIB libspdk_idxd.a 00:03:22.801 CC lib/env_dpdk/pci_dpdk.o 00:03:22.801 SO libspdk_idxd.so.12.1 00:03:22.801 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:22.801 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:23.060 SYMLINK libspdk_idxd.so 00:03:23.060 LIB libspdk_rdma_provider.a 00:03:23.060 SO libspdk_rdma_provider.so.7.0 00:03:23.060 SYMLINK libspdk_rdma_provider.so 00:03:23.060 LIB libspdk_jsonrpc.a 00:03:23.060 SO libspdk_jsonrpc.so.6.0 00:03:23.060 SYMLINK libspdk_jsonrpc.so 00:03:23.317 CC lib/rpc/rpc.o 00:03:23.576 LIB libspdk_rpc.a 00:03:23.576 SO libspdk_rpc.so.6.0 00:03:23.576 SYMLINK libspdk_rpc.so 00:03:23.576 LIB libspdk_env_dpdk.a 00:03:23.576 SO libspdk_env_dpdk.so.15.1 00:03:23.833 CC lib/trace/trace_flags.o 00:03:23.833 CC lib/trace/trace.o 00:03:23.833 CC lib/trace/trace_rpc.o 00:03:23.833 CC lib/notify/notify.o 00:03:23.833 CC lib/notify/notify_rpc.o 00:03:23.833 CC lib/keyring/keyring.o 00:03:23.833 CC lib/keyring/keyring_rpc.o 00:03:23.833 SYMLINK libspdk_env_dpdk.so 00:03:23.833 LIB libspdk_notify.a 00:03:23.833 SO libspdk_notify.so.6.0 00:03:23.833 SYMLINK libspdk_notify.so 00:03:24.091 LIB libspdk_keyring.a 00:03:24.091 LIB libspdk_trace.a 00:03:24.091 SO libspdk_keyring.so.2.0 00:03:24.091 SO libspdk_trace.so.11.0 00:03:24.091 SYMLINK libspdk_keyring.so 00:03:24.091 SYMLINK libspdk_trace.so 00:03:24.348 CC lib/sock/sock.o 00:03:24.348 CC lib/sock/sock_rpc.o 00:03:24.348 CC lib/thread/thread.o 00:03:24.348 CC lib/thread/iobuf.o 00:03:24.606 LIB libspdk_sock.a 00:03:24.606 SO libspdk_sock.so.10.0 00:03:24.606 SYMLINK libspdk_sock.so 00:03:24.864 CC lib/nvme/nvme_ctrlr.o 00:03:24.864 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:24.864 CC lib/nvme/nvme_fabric.o 00:03:24.864 CC lib/nvme/nvme_pcie.o 00:03:24.864 CC lib/nvme/nvme.o 00:03:24.864 CC lib/nvme/nvme_qpair.o 00:03:24.864 CC lib/nvme/nvme_ns.o 00:03:24.864 CC lib/nvme/nvme_ns_cmd.o 00:03:24.864 CC lib/nvme/nvme_pcie_common.o 00:03:25.430 CC lib/nvme/nvme_quirks.o 00:03:25.430 CC 
lib/nvme/nvme_transport.o 00:03:25.689 CC lib/nvme/nvme_discovery.o 00:03:25.689 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:25.689 LIB libspdk_thread.a 00:03:25.689 SO libspdk_thread.so.11.0 00:03:25.689 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:25.689 CC lib/nvme/nvme_tcp.o 00:03:25.689 CC lib/nvme/nvme_opal.o 00:03:25.948 SYMLINK libspdk_thread.so 00:03:25.948 CC lib/nvme/nvme_io_msg.o 00:03:25.948 CC lib/accel/accel.o 00:03:25.948 CC lib/blob/blobstore.o 00:03:25.948 CC lib/blob/request.o 00:03:26.206 CC lib/nvme/nvme_poll_group.o 00:03:26.206 CC lib/init/json_config.o 00:03:26.206 CC lib/init/subsystem.o 00:03:26.206 CC lib/init/subsystem_rpc.o 00:03:26.465 CC lib/accel/accel_rpc.o 00:03:26.465 CC lib/accel/accel_sw.o 00:03:26.465 CC lib/blob/zeroes.o 00:03:26.465 CC lib/blob/blob_bs_dev.o 00:03:26.465 CC lib/init/rpc.o 00:03:26.465 CC lib/nvme/nvme_zns.o 00:03:26.723 LIB libspdk_init.a 00:03:26.723 SO libspdk_init.so.6.0 00:03:26.723 SYMLINK libspdk_init.so 00:03:26.723 CC lib/nvme/nvme_stubs.o 00:03:26.723 CC lib/nvme/nvme_auth.o 00:03:26.723 CC lib/virtio/virtio.o 00:03:26.723 CC lib/virtio/virtio_vhost_user.o 00:03:26.723 LIB libspdk_accel.a 00:03:26.723 CC lib/virtio/virtio_vfio_user.o 00:03:26.723 SO libspdk_accel.so.16.0 00:03:26.982 SYMLINK libspdk_accel.so 00:03:26.982 CC lib/virtio/virtio_pci.o 00:03:26.982 CC lib/fsdev/fsdev.o 00:03:26.982 CC lib/fsdev/fsdev_io.o 00:03:26.982 CC lib/nvme/nvme_cuse.o 00:03:26.982 CC lib/nvme/nvme_rdma.o 00:03:26.982 CC lib/bdev/bdev.o 00:03:27.241 CC lib/event/app.o 00:03:27.241 CC lib/bdev/bdev_rpc.o 00:03:27.241 LIB libspdk_virtio.a 00:03:27.241 SO libspdk_virtio.so.7.0 00:03:27.499 SYMLINK libspdk_virtio.so 00:03:27.499 CC lib/bdev/bdev_zone.o 00:03:27.499 CC lib/event/reactor.o 00:03:27.499 CC lib/bdev/part.o 00:03:27.499 CC lib/fsdev/fsdev_rpc.o 00:03:27.499 CC lib/bdev/scsi_nvme.o 00:03:27.499 CC lib/event/log_rpc.o 00:03:27.758 CC lib/event/app_rpc.o 00:03:27.758 LIB libspdk_fsdev.a 00:03:27.758 SO libspdk_fsdev.so.2.0 00:03:27.758 CC lib/event/scheduler_static.o 00:03:27.758 SYMLINK libspdk_fsdev.so 00:03:28.016 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:28.016 LIB libspdk_event.a 00:03:28.016 SO libspdk_event.so.14.0 00:03:28.016 SYMLINK libspdk_event.so 00:03:28.016 LIB libspdk_nvme.a 00:03:28.275 SO libspdk_nvme.so.15.0 00:03:28.533 SYMLINK libspdk_nvme.so 00:03:28.533 LIB libspdk_fuse_dispatcher.a 00:03:28.533 SO libspdk_fuse_dispatcher.so.1.0 00:03:28.533 SYMLINK libspdk_fuse_dispatcher.so 00:03:29.470 LIB libspdk_bdev.a 00:03:29.470 LIB libspdk_blob.a 00:03:29.470 SO libspdk_bdev.so.17.0 00:03:29.470 SO libspdk_blob.so.11.0 00:03:29.470 SYMLINK libspdk_bdev.so 00:03:29.470 SYMLINK libspdk_blob.so 00:03:29.470 CC lib/nvmf/ctrlr.o 00:03:29.470 CC lib/nvmf/ctrlr_discovery.o 00:03:29.470 CC lib/nvmf/ctrlr_bdev.o 00:03:29.470 CC lib/nvmf/subsystem.o 00:03:29.470 CC lib/scsi/dev.o 00:03:29.470 CC lib/nbd/nbd.o 00:03:29.470 CC lib/lvol/lvol.o 00:03:29.470 CC lib/blobfs/blobfs.o 00:03:29.470 CC lib/ftl/ftl_core.o 00:03:29.470 CC lib/ublk/ublk.o 00:03:29.729 CC lib/scsi/lun.o 00:03:29.729 CC lib/nbd/nbd_rpc.o 00:03:29.987 CC lib/ftl/ftl_init.o 00:03:29.987 LIB libspdk_nbd.a 00:03:29.987 SO libspdk_nbd.so.7.0 00:03:29.987 CC lib/scsi/port.o 00:03:29.987 SYMLINK libspdk_nbd.so 00:03:29.987 CC lib/ublk/ublk_rpc.o 00:03:29.987 CC lib/ftl/ftl_layout.o 00:03:29.987 CC lib/ftl/ftl_debug.o 00:03:30.247 CC lib/scsi/scsi.o 00:03:30.247 CC lib/scsi/scsi_bdev.o 00:03:30.247 LIB libspdk_ublk.a 00:03:30.247 SO libspdk_ublk.so.3.0 00:03:30.247 CC 
lib/scsi/scsi_pr.o 00:03:30.247 SYMLINK libspdk_ublk.so 00:03:30.247 CC lib/ftl/ftl_io.o 00:03:30.247 CC lib/scsi/scsi_rpc.o 00:03:30.247 CC lib/ftl/ftl_sb.o 00:03:30.247 CC lib/ftl/ftl_l2p.o 00:03:30.247 CC lib/ftl/ftl_l2p_flat.o 00:03:30.247 CC lib/blobfs/tree.o 00:03:30.505 CC lib/nvmf/nvmf.o 00:03:30.505 CC lib/ftl/ftl_nv_cache.o 00:03:30.505 LIB libspdk_lvol.a 00:03:30.505 CC lib/ftl/ftl_band.o 00:03:30.505 CC lib/ftl/ftl_band_ops.o 00:03:30.505 SO libspdk_lvol.so.10.0 00:03:30.505 LIB libspdk_blobfs.a 00:03:30.505 CC lib/ftl/ftl_writer.o 00:03:30.505 SO libspdk_blobfs.so.10.0 00:03:30.505 SYMLINK libspdk_lvol.so 00:03:30.505 CC lib/ftl/ftl_rq.o 00:03:30.505 SYMLINK libspdk_blobfs.so 00:03:30.505 CC lib/ftl/ftl_reloc.o 00:03:30.505 CC lib/scsi/task.o 00:03:30.764 CC lib/ftl/ftl_l2p_cache.o 00:03:30.764 CC lib/nvmf/nvmf_rpc.o 00:03:30.764 CC lib/ftl/ftl_p2l.o 00:03:30.764 LIB libspdk_scsi.a 00:03:30.764 CC lib/ftl/ftl_p2l_log.o 00:03:30.764 SO libspdk_scsi.so.9.0 00:03:30.764 CC lib/ftl/mngt/ftl_mngt.o 00:03:31.023 CC lib/nvmf/transport.o 00:03:31.023 CC lib/nvmf/tcp.o 00:03:31.023 SYMLINK libspdk_scsi.so 00:03:31.023 CC lib/nvmf/stubs.o 00:03:31.023 CC lib/nvmf/mdns_server.o 00:03:31.282 CC lib/nvmf/rdma.o 00:03:31.282 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:31.282 CC lib/nvmf/auth.o 00:03:31.282 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:31.282 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:31.282 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:31.541 CC lib/iscsi/conn.o 00:03:31.541 CC lib/vhost/vhost.o 00:03:31.541 CC lib/vhost/vhost_rpc.o 00:03:31.541 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:31.541 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:31.541 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:31.541 CC lib/vhost/vhost_scsi.o 00:03:31.541 CC lib/vhost/vhost_blk.o 00:03:31.799 CC lib/vhost/rte_vhost_user.o 00:03:31.799 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:31.799 CC lib/iscsi/init_grp.o 00:03:32.058 CC lib/iscsi/iscsi.o 00:03:32.058 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:32.058 CC lib/iscsi/param.o 00:03:32.058 CC lib/iscsi/portal_grp.o 00:03:32.058 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:32.058 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:32.369 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:32.369 CC lib/iscsi/tgt_node.o 00:03:32.369 CC lib/iscsi/iscsi_subsystem.o 00:03:32.369 CC lib/ftl/utils/ftl_conf.o 00:03:32.369 CC lib/ftl/utils/ftl_md.o 00:03:32.369 CC lib/iscsi/iscsi_rpc.o 00:03:32.369 CC lib/iscsi/task.o 00:03:32.369 CC lib/ftl/utils/ftl_mempool.o 00:03:32.629 CC lib/ftl/utils/ftl_bitmap.o 00:03:32.629 CC lib/ftl/utils/ftl_property.o 00:03:32.629 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:32.629 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:32.629 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:32.629 LIB libspdk_vhost.a 00:03:32.629 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:32.629 SO libspdk_vhost.so.8.0 00:03:32.629 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:32.629 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:32.629 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:32.629 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:32.629 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:32.629 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:32.888 SYMLINK libspdk_vhost.so 00:03:32.888 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:32.888 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:32.888 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:32.888 CC lib/ftl/base/ftl_base_dev.o 00:03:32.888 CC lib/ftl/base/ftl_base_bdev.o 00:03:32.888 CC lib/ftl/ftl_trace.o 00:03:33.146 LIB libspdk_nvmf.a 00:03:33.146 LIB libspdk_ftl.a 00:03:33.146 LIB libspdk_iscsi.a 00:03:33.146 SO 
libspdk_iscsi.so.8.0 00:03:33.146 SO libspdk_nvmf.so.20.0 00:03:33.146 SO libspdk_ftl.so.9.0 00:03:33.146 SYMLINK libspdk_iscsi.so 00:03:33.404 SYMLINK libspdk_nvmf.so 00:03:33.404 SYMLINK libspdk_ftl.so 00:03:33.663 CC module/env_dpdk/env_dpdk_rpc.o 00:03:33.663 CC module/sock/posix/posix.o 00:03:33.663 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:33.663 CC module/keyring/file/keyring.o 00:03:33.663 CC module/accel/error/accel_error.o 00:03:33.663 CC module/scheduler/gscheduler/gscheduler.o 00:03:33.663 CC module/blob/bdev/blob_bdev.o 00:03:33.663 CC module/accel/ioat/accel_ioat.o 00:03:33.663 CC module/fsdev/aio/fsdev_aio.o 00:03:33.663 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:33.663 LIB libspdk_env_dpdk_rpc.a 00:03:33.921 SO libspdk_env_dpdk_rpc.so.6.0 00:03:33.921 SYMLINK libspdk_env_dpdk_rpc.so 00:03:33.921 CC module/keyring/file/keyring_rpc.o 00:03:33.921 CC module/accel/ioat/accel_ioat_rpc.o 00:03:33.921 LIB libspdk_scheduler_dpdk_governor.a 00:03:33.921 LIB libspdk_scheduler_gscheduler.a 00:03:33.921 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:33.921 CC module/accel/error/accel_error_rpc.o 00:03:33.921 LIB libspdk_scheduler_dynamic.a 00:03:33.921 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:33.921 SO libspdk_scheduler_gscheduler.so.4.0 00:03:33.921 SO libspdk_scheduler_dynamic.so.4.0 00:03:33.921 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:33.921 SYMLINK libspdk_scheduler_gscheduler.so 00:03:33.921 CC module/fsdev/aio/linux_aio_mgr.o 00:03:33.921 LIB libspdk_blob_bdev.a 00:03:33.921 SYMLINK libspdk_scheduler_dynamic.so 00:03:33.921 SO libspdk_blob_bdev.so.11.0 00:03:33.921 LIB libspdk_accel_ioat.a 00:03:33.921 LIB libspdk_keyring_file.a 00:03:33.921 SO libspdk_accel_ioat.so.6.0 00:03:33.921 LIB libspdk_accel_error.a 00:03:33.921 SO libspdk_keyring_file.so.2.0 00:03:33.921 SYMLINK libspdk_blob_bdev.so 00:03:33.921 SO libspdk_accel_error.so.2.0 00:03:33.921 SYMLINK libspdk_accel_ioat.so 00:03:33.921 SYMLINK libspdk_keyring_file.so 00:03:34.180 SYMLINK libspdk_accel_error.so 00:03:34.180 CC module/accel/dsa/accel_dsa.o 00:03:34.180 CC module/accel/dsa/accel_dsa_rpc.o 00:03:34.180 CC module/keyring/linux/keyring.o 00:03:34.180 CC module/keyring/linux/keyring_rpc.o 00:03:34.180 CC module/accel/iaa/accel_iaa.o 00:03:34.180 CC module/bdev/delay/vbdev_delay.o 00:03:34.180 LIB libspdk_keyring_linux.a 00:03:34.180 CC module/bdev/error/vbdev_error.o 00:03:34.180 CC module/blobfs/bdev/blobfs_bdev.o 00:03:34.180 SO libspdk_keyring_linux.so.1.0 00:03:34.180 LIB libspdk_fsdev_aio.a 00:03:34.180 LIB libspdk_accel_dsa.a 00:03:34.180 SO libspdk_fsdev_aio.so.1.0 00:03:34.180 SYMLINK libspdk_keyring_linux.so 00:03:34.180 CC module/bdev/lvol/vbdev_lvol.o 00:03:34.180 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:34.438 CC module/bdev/gpt/gpt.o 00:03:34.438 CC module/accel/iaa/accel_iaa_rpc.o 00:03:34.438 SO libspdk_accel_dsa.so.5.0 00:03:34.438 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:34.438 SYMLINK libspdk_fsdev_aio.so 00:03:34.438 CC module/bdev/gpt/vbdev_gpt.o 00:03:34.438 SYMLINK libspdk_accel_dsa.so 00:03:34.438 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:34.438 LIB libspdk_sock_posix.a 00:03:34.438 LIB libspdk_accel_iaa.a 00:03:34.438 SO libspdk_sock_posix.so.6.0 00:03:34.438 SO libspdk_accel_iaa.so.3.0 00:03:34.438 LIB libspdk_blobfs_bdev.a 00:03:34.438 CC module/bdev/error/vbdev_error_rpc.o 00:03:34.439 SO libspdk_blobfs_bdev.so.6.0 00:03:34.439 SYMLINK libspdk_accel_iaa.so 00:03:34.439 SYMLINK libspdk_sock_posix.so 00:03:34.439 LIB libspdk_bdev_delay.a 
00:03:34.439 SYMLINK libspdk_blobfs_bdev.so 00:03:34.439 LIB libspdk_bdev_gpt.a 00:03:34.439 SO libspdk_bdev_delay.so.6.0 00:03:34.698 SO libspdk_bdev_gpt.so.6.0 00:03:34.698 CC module/bdev/malloc/bdev_malloc.o 00:03:34.698 LIB libspdk_bdev_error.a 00:03:34.698 SYMLINK libspdk_bdev_delay.so 00:03:34.698 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:34.698 SO libspdk_bdev_error.so.6.0 00:03:34.698 CC module/bdev/passthru/vbdev_passthru.o 00:03:34.698 CC module/bdev/nvme/bdev_nvme.o 00:03:34.698 CC module/bdev/null/bdev_null.o 00:03:34.698 SYMLINK libspdk_bdev_gpt.so 00:03:34.698 CC module/bdev/null/bdev_null_rpc.o 00:03:34.698 CC module/bdev/raid/bdev_raid.o 00:03:34.698 SYMLINK libspdk_bdev_error.so 00:03:34.698 CC module/bdev/raid/bdev_raid_rpc.o 00:03:34.698 LIB libspdk_bdev_lvol.a 00:03:34.698 SO libspdk_bdev_lvol.so.6.0 00:03:34.698 SYMLINK libspdk_bdev_lvol.so 00:03:34.698 CC module/bdev/raid/bdev_raid_sb.o 00:03:34.698 CC module/bdev/split/vbdev_split.o 00:03:34.698 LIB libspdk_bdev_null.a 00:03:34.957 SO libspdk_bdev_null.so.6.0 00:03:34.957 CC module/bdev/split/vbdev_split_rpc.o 00:03:34.957 SYMLINK libspdk_bdev_null.so 00:03:34.957 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:34.957 CC module/bdev/xnvme/bdev_xnvme.o 00:03:34.957 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:34.957 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:34.957 LIB libspdk_bdev_malloc.a 00:03:34.957 CC module/bdev/raid/raid0.o 00:03:34.957 SO libspdk_bdev_malloc.so.6.0 00:03:34.957 CC module/bdev/aio/bdev_aio.o 00:03:34.957 LIB libspdk_bdev_split.a 00:03:34.957 LIB libspdk_bdev_passthru.a 00:03:34.957 SYMLINK libspdk_bdev_malloc.so 00:03:34.957 CC module/bdev/aio/bdev_aio_rpc.o 00:03:34.957 SO libspdk_bdev_passthru.so.6.0 00:03:34.957 SO libspdk_bdev_split.so.6.0 00:03:35.215 SYMLINK libspdk_bdev_passthru.so 00:03:35.215 SYMLINK libspdk_bdev_split.so 00:03:35.215 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:35.215 CC module/bdev/nvme/nvme_rpc.o 00:03:35.215 CC module/bdev/nvme/bdev_mdns_client.o 00:03:35.215 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:35.215 CC module/bdev/nvme/vbdev_opal.o 00:03:35.215 LIB libspdk_bdev_zone_block.a 00:03:35.215 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:35.215 LIB libspdk_bdev_aio.a 00:03:35.215 SO libspdk_bdev_zone_block.so.6.0 00:03:35.215 SO libspdk_bdev_aio.so.6.0 00:03:35.215 LIB libspdk_bdev_xnvme.a 00:03:35.215 SYMLINK libspdk_bdev_zone_block.so 00:03:35.215 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:35.215 SO libspdk_bdev_xnvme.so.3.0 00:03:35.215 SYMLINK libspdk_bdev_aio.so 00:03:35.215 CC module/bdev/raid/raid1.o 00:03:35.215 CC module/bdev/ftl/bdev_ftl.o 00:03:35.215 CC module/bdev/raid/concat.o 00:03:35.473 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:35.473 SYMLINK libspdk_bdev_xnvme.so 00:03:35.473 CC module/bdev/iscsi/bdev_iscsi.o 00:03:35.473 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:35.473 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:35.473 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:35.473 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:35.732 LIB libspdk_bdev_ftl.a 00:03:35.732 LIB libspdk_bdev_raid.a 00:03:35.732 SO libspdk_bdev_ftl.so.6.0 00:03:35.732 SO libspdk_bdev_raid.so.6.0 00:03:35.732 SYMLINK libspdk_bdev_ftl.so 00:03:35.732 SYMLINK libspdk_bdev_raid.so 00:03:35.732 LIB libspdk_bdev_iscsi.a 00:03:35.732 SO libspdk_bdev_iscsi.so.6.0 00:03:35.991 SYMLINK libspdk_bdev_iscsi.so 00:03:35.991 LIB libspdk_bdev_virtio.a 00:03:35.991 SO libspdk_bdev_virtio.so.6.0 00:03:35.991 SYMLINK libspdk_bdev_virtio.so 00:03:36.928 LIB 
libspdk_bdev_nvme.a 00:03:36.928 SO libspdk_bdev_nvme.so.7.1 00:03:36.928 SYMLINK libspdk_bdev_nvme.so 00:03:37.187 CC module/event/subsystems/sock/sock.o 00:03:37.187 CC module/event/subsystems/iobuf/iobuf.o 00:03:37.187 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:37.187 CC module/event/subsystems/vmd/vmd.o 00:03:37.187 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:37.187 CC module/event/subsystems/keyring/keyring.o 00:03:37.187 CC module/event/subsystems/scheduler/scheduler.o 00:03:37.187 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:37.446 CC module/event/subsystems/fsdev/fsdev.o 00:03:37.446 LIB libspdk_event_sock.a 00:03:37.446 LIB libspdk_event_keyring.a 00:03:37.446 LIB libspdk_event_vhost_blk.a 00:03:37.446 SO libspdk_event_sock.so.5.0 00:03:37.446 LIB libspdk_event_scheduler.a 00:03:37.446 LIB libspdk_event_vmd.a 00:03:37.446 LIB libspdk_event_iobuf.a 00:03:37.446 SO libspdk_event_keyring.so.1.0 00:03:37.446 SO libspdk_event_vhost_blk.so.3.0 00:03:37.446 SO libspdk_event_scheduler.so.4.0 00:03:37.446 SO libspdk_event_vmd.so.6.0 00:03:37.446 LIB libspdk_event_fsdev.a 00:03:37.446 SO libspdk_event_iobuf.so.3.0 00:03:37.446 SO libspdk_event_fsdev.so.1.0 00:03:37.446 SYMLINK libspdk_event_sock.so 00:03:37.446 SYMLINK libspdk_event_keyring.so 00:03:37.446 SYMLINK libspdk_event_vhost_blk.so 00:03:37.446 SYMLINK libspdk_event_scheduler.so 00:03:37.446 SYMLINK libspdk_event_vmd.so 00:03:37.446 SYMLINK libspdk_event_iobuf.so 00:03:37.446 SYMLINK libspdk_event_fsdev.so 00:03:37.704 CC module/event/subsystems/accel/accel.o 00:03:37.704 LIB libspdk_event_accel.a 00:03:37.965 SO libspdk_event_accel.so.6.0 00:03:37.965 SYMLINK libspdk_event_accel.so 00:03:38.225 CC module/event/subsystems/bdev/bdev.o 00:03:38.225 LIB libspdk_event_bdev.a 00:03:38.225 SO libspdk_event_bdev.so.6.0 00:03:38.484 SYMLINK libspdk_event_bdev.so 00:03:38.484 CC module/event/subsystems/scsi/scsi.o 00:03:38.484 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:38.484 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:38.484 CC module/event/subsystems/ublk/ublk.o 00:03:38.484 CC module/event/subsystems/nbd/nbd.o 00:03:38.484 LIB libspdk_event_scsi.a 00:03:38.484 LIB libspdk_event_nbd.a 00:03:38.484 LIB libspdk_event_ublk.a 00:03:38.744 SO libspdk_event_scsi.so.6.0 00:03:38.744 SO libspdk_event_ublk.so.3.0 00:03:38.744 SO libspdk_event_nbd.so.6.0 00:03:38.744 SYMLINK libspdk_event_ublk.so 00:03:38.744 SYMLINK libspdk_event_scsi.so 00:03:38.744 SYMLINK libspdk_event_nbd.so 00:03:38.744 LIB libspdk_event_nvmf.a 00:03:38.744 SO libspdk_event_nvmf.so.6.0 00:03:38.744 SYMLINK libspdk_event_nvmf.so 00:03:38.744 CC module/event/subsystems/iscsi/iscsi.o 00:03:38.744 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:39.004 LIB libspdk_event_vhost_scsi.a 00:03:39.004 LIB libspdk_event_iscsi.a 00:03:39.004 SO libspdk_event_vhost_scsi.so.3.0 00:03:39.004 SO libspdk_event_iscsi.so.6.0 00:03:39.004 SYMLINK libspdk_event_vhost_scsi.so 00:03:39.004 SYMLINK libspdk_event_iscsi.so 00:03:39.261 SO libspdk.so.6.0 00:03:39.261 SYMLINK libspdk.so 00:03:39.261 CC app/trace_record/trace_record.o 00:03:39.261 CXX app/trace/trace.o 00:03:39.261 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:39.519 CC app/iscsi_tgt/iscsi_tgt.o 00:03:39.519 CC app/nvmf_tgt/nvmf_main.o 00:03:39.519 CC examples/util/zipf/zipf.o 00:03:39.519 CC app/spdk_tgt/spdk_tgt.o 00:03:39.519 CC test/thread/poller_perf/poller_perf.o 00:03:39.519 CC examples/ioat/perf/perf.o 00:03:39.519 CC test/dma/test_dma/test_dma.o 00:03:39.519 LINK interrupt_tgt 
00:03:39.519 LINK poller_perf 00:03:39.519 LINK nvmf_tgt 00:03:39.519 LINK iscsi_tgt 00:03:39.519 LINK zipf 00:03:39.519 LINK spdk_trace_record 00:03:39.519 LINK spdk_tgt 00:03:39.778 LINK ioat_perf 00:03:39.778 LINK spdk_trace 00:03:39.778 TEST_HEADER include/spdk/accel.h 00:03:39.778 TEST_HEADER include/spdk/accel_module.h 00:03:39.778 TEST_HEADER include/spdk/assert.h 00:03:39.778 TEST_HEADER include/spdk/barrier.h 00:03:39.778 TEST_HEADER include/spdk/base64.h 00:03:39.778 TEST_HEADER include/spdk/bdev.h 00:03:39.778 TEST_HEADER include/spdk/bdev_module.h 00:03:39.778 TEST_HEADER include/spdk/bdev_zone.h 00:03:39.778 TEST_HEADER include/spdk/bit_array.h 00:03:39.778 TEST_HEADER include/spdk/bit_pool.h 00:03:39.778 TEST_HEADER include/spdk/blob_bdev.h 00:03:39.778 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:39.778 TEST_HEADER include/spdk/blobfs.h 00:03:39.778 TEST_HEADER include/spdk/blob.h 00:03:39.778 TEST_HEADER include/spdk/conf.h 00:03:39.778 TEST_HEADER include/spdk/config.h 00:03:39.778 TEST_HEADER include/spdk/cpuset.h 00:03:39.778 TEST_HEADER include/spdk/crc16.h 00:03:39.778 TEST_HEADER include/spdk/crc32.h 00:03:39.778 TEST_HEADER include/spdk/crc64.h 00:03:39.778 TEST_HEADER include/spdk/dif.h 00:03:39.778 TEST_HEADER include/spdk/dma.h 00:03:39.778 TEST_HEADER include/spdk/endian.h 00:03:39.778 TEST_HEADER include/spdk/env_dpdk.h 00:03:39.778 TEST_HEADER include/spdk/env.h 00:03:39.778 TEST_HEADER include/spdk/event.h 00:03:39.778 TEST_HEADER include/spdk/fd_group.h 00:03:39.778 TEST_HEADER include/spdk/fd.h 00:03:39.778 TEST_HEADER include/spdk/file.h 00:03:39.778 TEST_HEADER include/spdk/fsdev.h 00:03:39.778 TEST_HEADER include/spdk/fsdev_module.h 00:03:39.778 TEST_HEADER include/spdk/ftl.h 00:03:39.778 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:39.778 TEST_HEADER include/spdk/gpt_spec.h 00:03:39.778 CC app/spdk_lspci/spdk_lspci.o 00:03:39.778 TEST_HEADER include/spdk/hexlify.h 00:03:39.778 TEST_HEADER include/spdk/histogram_data.h 00:03:39.778 TEST_HEADER include/spdk/idxd.h 00:03:39.778 TEST_HEADER include/spdk/idxd_spec.h 00:03:39.778 TEST_HEADER include/spdk/init.h 00:03:39.778 TEST_HEADER include/spdk/ioat.h 00:03:39.778 TEST_HEADER include/spdk/ioat_spec.h 00:03:39.778 TEST_HEADER include/spdk/iscsi_spec.h 00:03:39.778 TEST_HEADER include/spdk/json.h 00:03:39.778 TEST_HEADER include/spdk/jsonrpc.h 00:03:39.778 TEST_HEADER include/spdk/keyring.h 00:03:39.778 TEST_HEADER include/spdk/keyring_module.h 00:03:39.778 TEST_HEADER include/spdk/likely.h 00:03:39.778 CC test/app/bdev_svc/bdev_svc.o 00:03:39.778 TEST_HEADER include/spdk/log.h 00:03:39.778 TEST_HEADER include/spdk/lvol.h 00:03:39.778 TEST_HEADER include/spdk/md5.h 00:03:39.778 TEST_HEADER include/spdk/memory.h 00:03:39.778 TEST_HEADER include/spdk/mmio.h 00:03:39.778 TEST_HEADER include/spdk/nbd.h 00:03:39.778 TEST_HEADER include/spdk/net.h 00:03:39.778 TEST_HEADER include/spdk/notify.h 00:03:39.778 TEST_HEADER include/spdk/nvme.h 00:03:39.778 TEST_HEADER include/spdk/nvme_intel.h 00:03:39.778 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:39.778 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:39.778 TEST_HEADER include/spdk/nvme_spec.h 00:03:39.778 TEST_HEADER include/spdk/nvme_zns.h 00:03:39.778 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:39.778 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:39.778 TEST_HEADER include/spdk/nvmf.h 00:03:39.778 TEST_HEADER include/spdk/nvmf_spec.h 00:03:39.778 TEST_HEADER include/spdk/nvmf_transport.h 00:03:39.778 CC test/env/vtophys/vtophys.o 00:03:39.778 TEST_HEADER 
include/spdk/opal.h 00:03:39.778 TEST_HEADER include/spdk/opal_spec.h 00:03:39.778 TEST_HEADER include/spdk/pci_ids.h 00:03:39.778 TEST_HEADER include/spdk/pipe.h 00:03:39.778 TEST_HEADER include/spdk/queue.h 00:03:39.778 CC examples/ioat/verify/verify.o 00:03:39.778 TEST_HEADER include/spdk/reduce.h 00:03:39.778 TEST_HEADER include/spdk/rpc.h 00:03:39.778 TEST_HEADER include/spdk/scheduler.h 00:03:39.778 TEST_HEADER include/spdk/scsi.h 00:03:39.778 TEST_HEADER include/spdk/scsi_spec.h 00:03:39.778 TEST_HEADER include/spdk/sock.h 00:03:39.778 TEST_HEADER include/spdk/stdinc.h 00:03:39.778 CC test/event/event_perf/event_perf.o 00:03:39.778 TEST_HEADER include/spdk/string.h 00:03:39.778 TEST_HEADER include/spdk/thread.h 00:03:39.778 TEST_HEADER include/spdk/trace.h 00:03:39.778 TEST_HEADER include/spdk/trace_parser.h 00:03:39.778 TEST_HEADER include/spdk/tree.h 00:03:39.778 TEST_HEADER include/spdk/ublk.h 00:03:39.778 TEST_HEADER include/spdk/util.h 00:03:39.778 TEST_HEADER include/spdk/uuid.h 00:03:39.778 TEST_HEADER include/spdk/version.h 00:03:39.778 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:39.778 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:39.778 CC test/env/mem_callbacks/mem_callbacks.o 00:03:39.778 TEST_HEADER include/spdk/vhost.h 00:03:39.778 TEST_HEADER include/spdk/vmd.h 00:03:39.778 TEST_HEADER include/spdk/xor.h 00:03:39.778 TEST_HEADER include/spdk/zipf.h 00:03:39.778 CXX test/cpp_headers/accel.o 00:03:39.778 CC test/rpc_client/rpc_client_test.o 00:03:39.778 LINK spdk_lspci 00:03:40.037 LINK test_dma 00:03:40.037 LINK vtophys 00:03:40.037 LINK bdev_svc 00:03:40.037 CC examples/thread/thread/thread_ex.o 00:03:40.037 LINK event_perf 00:03:40.037 CXX test/cpp_headers/accel_module.o 00:03:40.037 LINK verify 00:03:40.037 LINK rpc_client_test 00:03:40.037 CC app/spdk_nvme_perf/perf.o 00:03:40.037 CC app/spdk_nvme_identify/identify.o 00:03:40.037 CXX test/cpp_headers/assert.o 00:03:40.037 CC test/event/reactor/reactor.o 00:03:40.037 CC test/event/reactor_perf/reactor_perf.o 00:03:40.296 CC test/event/app_repeat/app_repeat.o 00:03:40.296 LINK thread 00:03:40.296 CC test/app/histogram_perf/histogram_perf.o 00:03:40.296 LINK mem_callbacks 00:03:40.296 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:40.296 LINK reactor_perf 00:03:40.296 LINK reactor 00:03:40.296 CXX test/cpp_headers/barrier.o 00:03:40.296 LINK histogram_perf 00:03:40.296 LINK app_repeat 00:03:40.554 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:40.554 CC test/env/memory/memory_ut.o 00:03:40.554 CXX test/cpp_headers/base64.o 00:03:40.554 CC examples/sock/hello_world/hello_sock.o 00:03:40.554 CC examples/vmd/lsvmd/lsvmd.o 00:03:40.554 LINK env_dpdk_post_init 00:03:40.554 CXX test/cpp_headers/bdev.o 00:03:40.554 LINK nvme_fuzz 00:03:40.554 CC test/event/scheduler/scheduler.o 00:03:40.554 CC examples/idxd/perf/perf.o 00:03:40.813 LINK lsvmd 00:03:40.813 LINK hello_sock 00:03:40.813 LINK spdk_nvme_perf 00:03:40.813 CXX test/cpp_headers/bdev_module.o 00:03:40.813 LINK scheduler 00:03:40.813 CXX test/cpp_headers/bdev_zone.o 00:03:40.813 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:40.813 CC examples/vmd/led/led.o 00:03:40.813 LINK spdk_nvme_identify 00:03:41.072 CC test/accel/dif/dif.o 00:03:41.072 LINK idxd_perf 00:03:41.072 CXX test/cpp_headers/bit_array.o 00:03:41.072 LINK led 00:03:41.072 CC test/app/jsoncat/jsoncat.o 00:03:41.072 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:41.072 CC examples/accel/perf/accel_perf.o 00:03:41.072 CC app/spdk_nvme_discover/discovery_aer.o 00:03:41.072 CXX 
test/cpp_headers/bit_pool.o 00:03:41.332 CXX test/cpp_headers/blob_bdev.o 00:03:41.332 LINK jsoncat 00:03:41.332 CC test/app/stub/stub.o 00:03:41.332 LINK hello_fsdev 00:03:41.332 LINK spdk_nvme_discover 00:03:41.332 CXX test/cpp_headers/blobfs_bdev.o 00:03:41.332 LINK stub 00:03:41.590 CC test/blobfs/mkfs/mkfs.o 00:03:41.590 CXX test/cpp_headers/blobfs.o 00:03:41.590 CC test/lvol/esnap/esnap.o 00:03:41.590 LINK memory_ut 00:03:41.590 CXX test/cpp_headers/blob.o 00:03:41.590 CC app/spdk_top/spdk_top.o 00:03:41.590 LINK accel_perf 00:03:41.590 LINK dif 00:03:41.590 CC test/nvme/aer/aer.o 00:03:41.590 CXX test/cpp_headers/conf.o 00:03:41.848 LINK mkfs 00:03:41.848 CC test/nvme/reset/reset.o 00:03:41.848 CC test/env/pci/pci_ut.o 00:03:41.848 CXX test/cpp_headers/config.o 00:03:41.848 CXX test/cpp_headers/cpuset.o 00:03:41.848 CC test/nvme/sgl/sgl.o 00:03:41.848 CC examples/blob/hello_world/hello_blob.o 00:03:42.106 LINK aer 00:03:42.106 CXX test/cpp_headers/crc16.o 00:03:42.106 LINK reset 00:03:42.106 CC test/bdev/bdevio/bdevio.o 00:03:42.106 LINK hello_blob 00:03:42.106 LINK pci_ut 00:03:42.106 CC examples/blob/cli/blobcli.o 00:03:42.106 CC test/nvme/e2edp/nvme_dp.o 00:03:42.365 CXX test/cpp_headers/crc32.o 00:03:42.365 LINK sgl 00:03:42.365 CXX test/cpp_headers/crc64.o 00:03:42.365 CXX test/cpp_headers/dif.o 00:03:42.365 CC app/vhost/vhost.o 00:03:42.365 LINK bdevio 00:03:42.365 CC test/nvme/overhead/overhead.o 00:03:42.365 LINK iscsi_fuzz 00:03:42.365 LINK nvme_dp 00:03:42.624 CXX test/cpp_headers/dma.o 00:03:42.624 LINK spdk_top 00:03:42.624 LINK vhost 00:03:42.624 CC examples/nvme/hello_world/hello_world.o 00:03:42.624 CXX test/cpp_headers/endian.o 00:03:42.624 CXX test/cpp_headers/env_dpdk.o 00:03:42.624 CC examples/nvme/reconnect/reconnect.o 00:03:42.624 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:42.624 LINK blobcli 00:03:42.624 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:42.624 LINK overhead 00:03:42.883 CXX test/cpp_headers/env.o 00:03:42.883 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:42.883 LINK hello_world 00:03:42.883 CC app/spdk_dd/spdk_dd.o 00:03:42.883 CC test/nvme/err_injection/err_injection.o 00:03:42.883 CC app/fio/nvme/fio_plugin.o 00:03:42.883 CXX test/cpp_headers/event.o 00:03:42.883 CXX test/cpp_headers/fd_group.o 00:03:42.883 LINK reconnect 00:03:42.883 CC examples/bdev/hello_world/hello_bdev.o 00:03:43.141 LINK err_injection 00:03:43.141 CC examples/nvme/arbitration/arbitration.o 00:03:43.141 CXX test/cpp_headers/fd.o 00:03:43.141 LINK nvme_manage 00:03:43.141 CC app/fio/bdev/fio_plugin.o 00:03:43.141 LINK vhost_fuzz 00:03:43.141 LINK hello_bdev 00:03:43.141 LINK spdk_dd 00:03:43.141 CXX test/cpp_headers/file.o 00:03:43.141 CC test/nvme/startup/startup.o 00:03:43.400 CXX test/cpp_headers/fsdev.o 00:03:43.400 CXX test/cpp_headers/fsdev_module.o 00:03:43.400 LINK spdk_nvme 00:03:43.400 LINK arbitration 00:03:43.400 LINK startup 00:03:43.400 CXX test/cpp_headers/ftl.o 00:03:43.400 CC test/nvme/reserve/reserve.o 00:03:43.400 CC examples/nvme/hotplug/hotplug.o 00:03:43.400 CC examples/bdev/bdevperf/bdevperf.o 00:03:43.400 CXX test/cpp_headers/fuse_dispatcher.o 00:03:43.400 CC test/nvme/simple_copy/simple_copy.o 00:03:43.659 CXX test/cpp_headers/gpt_spec.o 00:03:43.659 CXX test/cpp_headers/hexlify.o 00:03:43.659 LINK hotplug 00:03:43.659 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:43.659 CC test/nvme/connect_stress/connect_stress.o 00:03:43.659 LINK reserve 00:03:43.659 LINK spdk_bdev 00:03:43.659 LINK simple_copy 00:03:43.659 CXX 
test/cpp_headers/histogram_data.o 00:03:43.659 LINK cmb_copy 00:03:43.659 CC test/nvme/boot_partition/boot_partition.o 00:03:43.659 LINK connect_stress 00:03:43.659 CC examples/nvme/abort/abort.o 00:03:43.659 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:43.918 CC test/nvme/compliance/nvme_compliance.o 00:03:43.918 CXX test/cpp_headers/idxd.o 00:03:43.918 CXX test/cpp_headers/idxd_spec.o 00:03:43.918 CC test/nvme/fused_ordering/fused_ordering.o 00:03:43.918 CXX test/cpp_headers/init.o 00:03:43.918 LINK boot_partition 00:03:43.918 LINK pmr_persistence 00:03:43.918 CXX test/cpp_headers/ioat.o 00:03:43.918 CXX test/cpp_headers/ioat_spec.o 00:03:43.918 LINK fused_ordering 00:03:44.247 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:44.247 CC test/nvme/fdp/fdp.o 00:03:44.247 LINK nvme_compliance 00:03:44.247 CXX test/cpp_headers/iscsi_spec.o 00:03:44.247 CXX test/cpp_headers/json.o 00:03:44.247 CXX test/cpp_headers/jsonrpc.o 00:03:44.247 CC test/nvme/cuse/cuse.o 00:03:44.247 LINK abort 00:03:44.247 LINK doorbell_aers 00:03:44.247 CXX test/cpp_headers/keyring.o 00:03:44.247 CXX test/cpp_headers/keyring_module.o 00:03:44.247 CXX test/cpp_headers/likely.o 00:03:44.247 LINK bdevperf 00:03:44.247 CXX test/cpp_headers/log.o 00:03:44.247 CXX test/cpp_headers/lvol.o 00:03:44.247 CXX test/cpp_headers/md5.o 00:03:44.247 CXX test/cpp_headers/memory.o 00:03:44.247 CXX test/cpp_headers/mmio.o 00:03:44.247 LINK fdp 00:03:44.247 CXX test/cpp_headers/nbd.o 00:03:44.506 CXX test/cpp_headers/net.o 00:03:44.506 CXX test/cpp_headers/notify.o 00:03:44.506 CXX test/cpp_headers/nvme.o 00:03:44.506 CXX test/cpp_headers/nvme_intel.o 00:03:44.506 CXX test/cpp_headers/nvme_ocssd.o 00:03:44.506 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:44.506 CXX test/cpp_headers/nvme_spec.o 00:03:44.506 CC examples/nvmf/nvmf/nvmf.o 00:03:44.506 CXX test/cpp_headers/nvme_zns.o 00:03:44.506 CXX test/cpp_headers/nvmf_cmd.o 00:03:44.506 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:44.506 CXX test/cpp_headers/nvmf.o 00:03:44.506 CXX test/cpp_headers/nvmf_spec.o 00:03:44.506 CXX test/cpp_headers/nvmf_transport.o 00:03:44.506 CXX test/cpp_headers/opal.o 00:03:44.765 CXX test/cpp_headers/opal_spec.o 00:03:44.765 CXX test/cpp_headers/pci_ids.o 00:03:44.765 CXX test/cpp_headers/pipe.o 00:03:44.765 CXX test/cpp_headers/queue.o 00:03:44.765 CXX test/cpp_headers/reduce.o 00:03:44.765 CXX test/cpp_headers/rpc.o 00:03:44.765 CXX test/cpp_headers/scheduler.o 00:03:44.766 LINK nvmf 00:03:44.766 CXX test/cpp_headers/scsi.o 00:03:44.766 CXX test/cpp_headers/scsi_spec.o 00:03:44.766 CXX test/cpp_headers/sock.o 00:03:44.766 CXX test/cpp_headers/stdinc.o 00:03:44.766 CXX test/cpp_headers/string.o 00:03:44.766 CXX test/cpp_headers/thread.o 00:03:44.766 CXX test/cpp_headers/trace.o 00:03:44.766 CXX test/cpp_headers/trace_parser.o 00:03:44.766 CXX test/cpp_headers/tree.o 00:03:45.026 CXX test/cpp_headers/ublk.o 00:03:45.026 CXX test/cpp_headers/util.o 00:03:45.026 CXX test/cpp_headers/uuid.o 00:03:45.026 CXX test/cpp_headers/version.o 00:03:45.026 CXX test/cpp_headers/vfio_user_pci.o 00:03:45.026 CXX test/cpp_headers/vfio_user_spec.o 00:03:45.026 CXX test/cpp_headers/vhost.o 00:03:45.026 CXX test/cpp_headers/vmd.o 00:03:45.026 CXX test/cpp_headers/xor.o 00:03:45.026 CXX test/cpp_headers/zipf.o 00:03:45.286 LINK cuse 00:03:46.228 LINK esnap 00:03:46.487 00:03:46.487 real 1m1.643s 00:03:46.487 user 5m56.372s 00:03:46.487 sys 1m1.767s 00:03:46.487 18:13:04 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:46.487 18:13:04 make -- 
common/autotest_common.sh@10 -- $ set +x 00:03:46.487 ************************************ 00:03:46.487 END TEST make 00:03:46.488 ************************************ 00:03:46.488 18:13:04 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:46.488 18:13:04 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:46.488 18:13:04 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:46.488 18:13:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.488 18:13:04 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:46.488 18:13:04 -- pm/common@44 -- $ pid=5070 00:03:46.488 18:13:04 -- pm/common@50 -- $ kill -TERM 5070 00:03:46.488 18:13:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.488 18:13:04 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:46.488 18:13:04 -- pm/common@44 -- $ pid=5071 00:03:46.488 18:13:04 -- pm/common@50 -- $ kill -TERM 5071 00:03:46.488 18:13:04 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:46.488 18:13:04 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:46.488 18:13:04 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:46.488 18:13:04 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:46.488 18:13:04 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:46.488 18:13:05 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:46.488 18:13:05 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:46.488 18:13:05 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:46.488 18:13:05 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:46.488 18:13:05 -- scripts/common.sh@336 -- # IFS=.-: 00:03:46.488 18:13:05 -- scripts/common.sh@336 -- # read -ra ver1 00:03:46.488 18:13:05 -- scripts/common.sh@337 -- # IFS=.-: 00:03:46.488 18:13:05 -- scripts/common.sh@337 -- # read -ra ver2 00:03:46.488 18:13:05 -- scripts/common.sh@338 -- # local 'op=<' 00:03:46.488 18:13:05 -- scripts/common.sh@340 -- # ver1_l=2 00:03:46.488 18:13:05 -- scripts/common.sh@341 -- # ver2_l=1 00:03:46.488 18:13:05 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:46.488 18:13:05 -- scripts/common.sh@344 -- # case "$op" in 00:03:46.488 18:13:05 -- scripts/common.sh@345 -- # : 1 00:03:46.488 18:13:05 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:46.488 18:13:05 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:46.488 18:13:05 -- scripts/common.sh@365 -- # decimal 1 00:03:46.488 18:13:05 -- scripts/common.sh@353 -- # local d=1 00:03:46.488 18:13:05 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:46.488 18:13:05 -- scripts/common.sh@355 -- # echo 1 00:03:46.488 18:13:05 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:46.488 18:13:05 -- scripts/common.sh@366 -- # decimal 2 00:03:46.488 18:13:05 -- scripts/common.sh@353 -- # local d=2 00:03:46.488 18:13:05 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:46.488 18:13:05 -- scripts/common.sh@355 -- # echo 2 00:03:46.488 18:13:05 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:46.488 18:13:05 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:46.488 18:13:05 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:46.488 18:13:05 -- scripts/common.sh@368 -- # return 0 00:03:46.488 18:13:05 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:46.488 18:13:05 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:46.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.488 --rc genhtml_branch_coverage=1 00:03:46.488 --rc genhtml_function_coverage=1 00:03:46.488 --rc genhtml_legend=1 00:03:46.488 --rc geninfo_all_blocks=1 00:03:46.488 --rc geninfo_unexecuted_blocks=1 00:03:46.488 00:03:46.488 ' 00:03:46.488 18:13:05 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:46.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.488 --rc genhtml_branch_coverage=1 00:03:46.488 --rc genhtml_function_coverage=1 00:03:46.488 --rc genhtml_legend=1 00:03:46.488 --rc geninfo_all_blocks=1 00:03:46.488 --rc geninfo_unexecuted_blocks=1 00:03:46.488 00:03:46.488 ' 00:03:46.488 18:13:05 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:46.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.488 --rc genhtml_branch_coverage=1 00:03:46.488 --rc genhtml_function_coverage=1 00:03:46.488 --rc genhtml_legend=1 00:03:46.488 --rc geninfo_all_blocks=1 00:03:46.488 --rc geninfo_unexecuted_blocks=1 00:03:46.488 00:03:46.488 ' 00:03:46.488 18:13:05 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:46.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.488 --rc genhtml_branch_coverage=1 00:03:46.488 --rc genhtml_function_coverage=1 00:03:46.488 --rc genhtml_legend=1 00:03:46.488 --rc geninfo_all_blocks=1 00:03:46.488 --rc geninfo_unexecuted_blocks=1 00:03:46.488 00:03:46.488 ' 00:03:46.488 18:13:05 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:46.488 18:13:05 -- nvmf/common.sh@7 -- # uname -s 00:03:46.488 18:13:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:46.488 18:13:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:46.488 18:13:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:46.488 18:13:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:46.488 18:13:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:46.488 18:13:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:46.488 18:13:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:46.488 18:13:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:46.488 18:13:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:46.488 18:13:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:46.488 18:13:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:aed9d235-300d-4f9e-8b5d-d05d02d268cd 00:03:46.488 
18:13:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=aed9d235-300d-4f9e-8b5d-d05d02d268cd 00:03:46.488 18:13:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:46.488 18:13:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:46.488 18:13:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:46.488 18:13:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:46.488 18:13:05 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:46.488 18:13:05 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:46.488 18:13:05 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:46.488 18:13:05 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:46.488 18:13:05 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:46.488 18:13:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.488 18:13:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.488 18:13:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.488 18:13:05 -- paths/export.sh@5 -- # export PATH 00:03:46.488 18:13:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.488 18:13:05 -- nvmf/common.sh@51 -- # : 0 00:03:46.488 18:13:05 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:46.488 18:13:05 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:46.488 18:13:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:46.488 18:13:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:46.488 18:13:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:46.488 18:13:05 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:46.488 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:46.488 18:13:05 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:46.488 18:13:05 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:46.488 18:13:05 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:46.488 18:13:05 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:46.488 18:13:05 -- spdk/autotest.sh@32 -- # uname -s 00:03:46.488 18:13:05 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:46.488 18:13:05 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:46.488 18:13:05 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:46.488 18:13:05 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:46.488 18:13:05 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:46.488 18:13:05 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:46.488 18:13:05 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:46.488 18:13:05 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:46.488 18:13:05 -- spdk/autotest.sh@48 -- # udevadm_pid=54173 00:03:46.488 18:13:05 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:46.488 18:13:05 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:46.488 18:13:05 -- pm/common@17 -- # local monitor 00:03:46.488 18:13:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.488 18:13:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.488 18:13:05 -- pm/common@25 -- # sleep 1 00:03:46.488 18:13:05 -- pm/common@21 -- # date +%s 00:03:46.488 18:13:05 -- pm/common@21 -- # date +%s 00:03:46.748 18:13:05 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732126385 00:03:46.748 18:13:05 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732126385 00:03:46.748 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732126385_collect-vmstat.pm.log 00:03:46.748 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732126385_collect-cpu-load.pm.log 00:03:47.687 18:13:06 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:47.687 18:13:06 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:47.687 18:13:06 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:47.687 18:13:06 -- common/autotest_common.sh@10 -- # set +x 00:03:47.687 18:13:06 -- spdk/autotest.sh@59 -- # create_test_list 00:03:47.687 18:13:06 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:47.687 18:13:06 -- common/autotest_common.sh@10 -- # set +x 00:03:47.687 18:13:06 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:47.687 18:13:06 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:47.687 18:13:06 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:47.687 18:13:06 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:47.687 18:13:06 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:47.687 18:13:06 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:47.687 18:13:06 -- common/autotest_common.sh@1457 -- # uname 00:03:47.687 18:13:06 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:47.687 18:13:06 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:47.687 18:13:06 -- common/autotest_common.sh@1477 -- # uname 00:03:47.687 18:13:06 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:47.687 18:13:06 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:47.687 18:13:06 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:47.687 lcov: LCOV version 1.15 00:03:47.687 18:13:06 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:02.594 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:02.594 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:20.722 18:13:36 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:20.722 18:13:36 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:20.722 18:13:36 -- common/autotest_common.sh@10 -- # set +x 00:04:20.722 18:13:36 -- spdk/autotest.sh@78 -- # rm -f 00:04:20.722 18:13:36 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:20.722 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:20.722 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:20.722 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:20.722 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:20.722 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:20.722 18:13:37 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:20.722 18:13:37 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:20.722 18:13:37 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:20.722 18:13:37 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:20.722 18:13:37 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:20.722 18:13:37 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:20.722 18:13:37 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:20.722 18:13:37 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:20.722 18:13:37 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:20.722 18:13:37 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:20.722 18:13:37 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:20.722 18:13:37 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:20.722 18:13:37 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:20.722 18:13:37 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:20.722 18:13:37 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:20.722 18:13:37 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:04:20.722 18:13:37 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:04:20.722 18:13:37 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:20.722 18:13:37 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:20.722 18:13:37 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:20.722 18:13:37 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:04:20.722 18:13:37 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:04:20.722 18:13:37 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:20.722 18:13:37 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:20.722 18:13:37 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:20.722 18:13:37 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:04:20.722 18:13:37 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:04:20.722 18:13:37 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:20.722 18:13:37 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:20.722 18:13:37 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:20.722 18:13:37 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:04:20.722 18:13:37 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:04:20.722 18:13:37 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:20.722 18:13:37 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:20.722 18:13:37 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:20.722 18:13:37 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:04:20.722 18:13:37 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:04:20.722 18:13:37 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:20.722 18:13:37 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:20.722 18:13:37 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:20.722 18:13:37 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:20.722 18:13:37 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:20.722 18:13:37 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:20.722 18:13:37 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:20.722 18:13:37 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:20.722 No valid GPT data, bailing 00:04:20.722 18:13:37 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:20.722 18:13:37 -- scripts/common.sh@394 -- # pt= 00:04:20.722 18:13:37 -- scripts/common.sh@395 -- # return 1 00:04:20.722 18:13:37 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:20.722 1+0 records in 00:04:20.722 1+0 records out 00:04:20.722 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0240739 s, 43.6 MB/s 00:04:20.722 18:13:37 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:20.722 18:13:37 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:20.722 18:13:37 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:20.722 18:13:37 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:20.722 18:13:37 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:20.722 No valid GPT data, bailing 00:04:20.722 18:13:37 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:20.722 18:13:37 -- scripts/common.sh@394 -- # pt= 00:04:20.722 18:13:37 -- scripts/common.sh@395 -- # return 1 00:04:20.722 18:13:37 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:20.722 1+0 records in 00:04:20.722 1+0 records out 00:04:20.722 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00552843 s, 190 MB/s 00:04:20.722 18:13:37 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:20.722 18:13:37 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:20.722 18:13:37 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:20.722 18:13:37 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:20.722 18:13:37 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:20.722 No valid GPT data, bailing 00:04:20.722 18:13:37 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:20.722 18:13:37 -- scripts/common.sh@394 -- # pt= 00:04:20.722 18:13:37 -- scripts/common.sh@395 -- # return 1 00:04:20.722 18:13:37 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:20.722 1+0 
records in 00:04:20.722 1+0 records out 00:04:20.722 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00424596 s, 247 MB/s 00:04:20.722 18:13:37 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:20.722 18:13:37 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:20.722 18:13:37 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:04:20.722 18:13:37 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:04:20.722 18:13:37 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:20.722 No valid GPT data, bailing 00:04:20.722 18:13:37 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:20.722 18:13:37 -- scripts/common.sh@394 -- # pt= 00:04:20.722 18:13:37 -- scripts/common.sh@395 -- # return 1 00:04:20.722 18:13:37 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:20.722 1+0 records in 00:04:20.722 1+0 records out 00:04:20.722 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0047688 s, 220 MB/s 00:04:20.722 18:13:37 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:20.722 18:13:37 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:20.722 18:13:37 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:04:20.723 18:13:37 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:04:20.723 18:13:37 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:20.723 No valid GPT data, bailing 00:04:20.723 18:13:37 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:20.723 18:13:37 -- scripts/common.sh@394 -- # pt= 00:04:20.723 18:13:37 -- scripts/common.sh@395 -- # return 1 00:04:20.723 18:13:37 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:20.723 1+0 records in 00:04:20.723 1+0 records out 00:04:20.723 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0058178 s, 180 MB/s 00:04:20.723 18:13:37 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:20.723 18:13:37 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:20.723 18:13:37 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:20.723 18:13:37 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:20.723 18:13:37 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:20.723 No valid GPT data, bailing 00:04:20.723 18:13:37 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:20.723 18:13:37 -- scripts/common.sh@394 -- # pt= 00:04:20.723 18:13:37 -- scripts/common.sh@395 -- # return 1 00:04:20.723 18:13:37 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:20.723 1+0 records in 00:04:20.723 1+0 records out 00:04:20.723 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00607451 s, 173 MB/s 00:04:20.723 18:13:37 -- spdk/autotest.sh@105 -- # sync 00:04:20.723 18:13:37 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:20.723 18:13:37 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:20.723 18:13:37 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:21.292 18:13:39 -- spdk/autotest.sh@111 -- # uname -s 00:04:21.292 18:13:39 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:21.292 18:13:39 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:21.292 18:13:39 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:21.553 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:22.133 
Hugepages 00:04:22.133 node hugesize free / total 00:04:22.133 node0 1048576kB 0 / 0 00:04:22.133 node0 2048kB 0 / 0 00:04:22.133 00:04:22.133 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:22.133 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:22.133 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:22.133 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:22.429 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:04:22.429 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:22.429 18:13:40 -- spdk/autotest.sh@117 -- # uname -s 00:04:22.429 18:13:40 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:22.429 18:13:40 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:22.429 18:13:40 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:22.691 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:23.262 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:23.262 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:23.262 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:23.262 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:23.522 18:13:41 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:24.463 18:13:42 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:24.463 18:13:42 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:24.463 18:13:42 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:24.463 18:13:42 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:24.463 18:13:42 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:24.463 18:13:42 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:24.463 18:13:42 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:24.463 18:13:42 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:24.463 18:13:42 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:24.463 18:13:43 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:24.463 18:13:43 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:24.463 18:13:43 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:24.723 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:24.983 Waiting for block devices as requested 00:04:24.983 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:24.983 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:25.244 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:25.244 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:30.530 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:30.530 18:13:48 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:30.530 18:13:48 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:30.530 18:13:48 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:30.530 18:13:48 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:30.530 18:13:48 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:30.530 18:13:48 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:30.530 18:13:48 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:30.530 18:13:48 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:30.530 18:13:48 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:30.530 18:13:48 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:30.530 18:13:48 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:30.530 18:13:48 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:30.530 18:13:48 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:30.530 18:13:48 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:30.530 18:13:48 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:30.530 18:13:48 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:30.530 18:13:48 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:30.530 18:13:48 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:30.530 18:13:48 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:30.530 18:13:48 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:30.531 18:13:48 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:30.531 18:13:48 -- common/autotest_common.sh@1543 -- # continue 00:04:30.531 18:13:48 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:30.531 18:13:48 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:30.531 18:13:48 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:30.531 18:13:48 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:30.531 18:13:48 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:30.531 18:13:48 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:30.531 18:13:48 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:30.531 18:13:48 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:30.531 18:13:48 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:30.531 18:13:48 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:30.531 18:13:48 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:30.531 18:13:48 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:30.531 18:13:48 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:30.531 18:13:48 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:30.531 18:13:48 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:30.531 18:13:48 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:30.531 18:13:48 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:30.531 18:13:48 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:30.531 18:13:48 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:30.531 18:13:48 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:30.531 18:13:48 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:30.531 18:13:48 -- common/autotest_common.sh@1543 -- # continue 00:04:30.531 18:13:48 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:30.531 18:13:48 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:30.531 18:13:48 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:30.531 18:13:48 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:04:30.531 18:13:48 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:30.531 18:13:48 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:30.531 18:13:48 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:30.531 18:13:48 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:04:30.531 18:13:48 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:04:30.531 18:13:48 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:04:30.531 18:13:48 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:04:30.531 18:13:48 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:30.531 18:13:48 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:30.531 18:13:48 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:30.531 18:13:48 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:30.531 18:13:48 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:30.531 18:13:48 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:30.531 18:13:48 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:30.531 18:13:48 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:30.531 18:13:48 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:30.531 18:13:48 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:30.531 18:13:48 -- common/autotest_common.sh@1543 -- # continue 00:04:30.531 18:13:48 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:30.531 18:13:48 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:30.531 18:13:48 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:30.531 18:13:48 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:04:30.531 18:13:48 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:30.531 18:13:48 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:30.531 18:13:48 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:30.531 18:13:48 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:04:30.531 18:13:48 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:04:30.531 18:13:48 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:04:30.531 18:13:48 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:04:30.531 18:13:48 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:30.531 18:13:48 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:30.531 18:13:48 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:30.531 18:13:48 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:30.531 18:13:48 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:30.531 18:13:48 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:30.531 18:13:48 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:30.531 18:13:48 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:30.531 18:13:48 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:30.531 18:13:48 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
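Each get_nvme_ctrlr_from_bdf block traced above repeats the same three steps: resolve every controller's sysfs path, keep the one under the target PCI address, and read the OACS word from nvme id-ctrl. Condensed into one sketch using the commands the log itself traces (the BDF below is illustrative; any address printed above works):

  bdf=0000:00:12.0   # illustrative BDF
  # readlink -f resolves each /sys/class/nvme symlink to its PCI sysfs path
  path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
  ctrl=$(basename "$path")
  # pull the Optional Admin Command Support word from the controller
  oacs=$(nvme id-ctrl "/dev/$ctrl" | grep oacs | cut -d: -f2)
  echo "controller=/dev/$ctrl oacs=$oacs"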
00:04:30.531 18:13:48 -- common/autotest_common.sh@1543 -- # continue 00:04:30.531 18:13:48 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:30.531 18:13:48 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:30.531 18:13:48 -- common/autotest_common.sh@10 -- # set +x 00:04:30.531 18:13:48 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:30.531 18:13:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:30.531 18:13:48 -- common/autotest_common.sh@10 -- # set +x 00:04:30.531 18:13:48 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:30.791 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:31.362 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:31.362 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:31.362 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:31.623 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:31.623 18:13:50 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:31.623 18:13:50 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:31.623 18:13:50 -- common/autotest_common.sh@10 -- # set +x 00:04:31.623 18:13:50 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:31.623 18:13:50 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:31.623 18:13:50 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:31.623 18:13:50 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:31.623 18:13:50 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:31.623 18:13:50 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:31.623 18:13:50 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:31.623 18:13:50 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:31.623 18:13:50 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:31.623 18:13:50 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:31.623 18:13:50 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:31.623 18:13:50 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:31.623 18:13:50 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:31.623 18:13:50 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:31.623 18:13:50 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:31.623 18:13:50 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:31.623 18:13:50 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:31.623 18:13:50 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:31.623 18:13:50 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:31.623 18:13:50 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:31.623 18:13:50 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:31.623 18:13:50 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:31.623 18:13:50 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:31.623 18:13:50 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:31.623 18:13:50 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:31.623 18:13:50 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:31.623 18:13:50 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
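The device-ID checks around this point come from opal_revert_cleanup: it enumerates NVMe PCI addresses with gen_nvme.sh and keeps only controllers whose PCI device ID is 0x0a54; these QEMU drives all report 0x0010, so nothing matches and the step returns without doing work. The same enumeration written out as a sketch, using exactly the commands traced in the log:

  bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
  for bdf in "${bdfs[@]}"; do
    device=$(cat "/sys/bus/pci/devices/$bdf/device")
    # 0x0a54 is the device ID the cleanup looks for
    [[ $device == 0x0a54 ]] && echo "$bdf"
  done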
00:04:31.623 18:13:50 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:31.623 18:13:50 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:31.623 18:13:50 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:31.623 18:13:50 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:31.623 18:13:50 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:31.623 18:13:50 -- common/autotest_common.sh@1572 -- # return 0 00:04:31.623 18:13:50 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:31.623 18:13:50 -- common/autotest_common.sh@1580 -- # return 0 00:04:31.623 18:13:50 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:31.623 18:13:50 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:31.623 18:13:50 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:31.623 18:13:50 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:31.623 18:13:50 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:31.623 18:13:50 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:31.623 18:13:50 -- common/autotest_common.sh@10 -- # set +x 00:04:31.623 18:13:50 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:31.623 18:13:50 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:31.623 18:13:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.623 18:13:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.623 18:13:50 -- common/autotest_common.sh@10 -- # set +x 00:04:31.623 ************************************ 00:04:31.623 START TEST env 00:04:31.623 ************************************ 00:04:31.623 18:13:50 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:31.885 * Looking for test storage... 00:04:31.885 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:31.885 18:13:50 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:31.885 18:13:50 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:31.885 18:13:50 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:31.885 18:13:50 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:31.885 18:13:50 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:31.885 18:13:50 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:31.885 18:13:50 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:31.885 18:13:50 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:31.885 18:13:50 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:31.885 18:13:50 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:31.885 18:13:50 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:31.885 18:13:50 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:31.885 18:13:50 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:31.885 18:13:50 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:31.885 18:13:50 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:31.885 18:13:50 env -- scripts/common.sh@344 -- # case "$op" in 00:04:31.885 18:13:50 env -- scripts/common.sh@345 -- # : 1 00:04:31.885 18:13:50 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:31.885 18:13:50 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:31.885 18:13:50 env -- scripts/common.sh@365 -- # decimal 1 00:04:31.885 18:13:50 env -- scripts/common.sh@353 -- # local d=1 00:04:31.885 18:13:50 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:31.885 18:13:50 env -- scripts/common.sh@355 -- # echo 1 00:04:31.885 18:13:50 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:31.885 18:13:50 env -- scripts/common.sh@366 -- # decimal 2 00:04:31.885 18:13:50 env -- scripts/common.sh@353 -- # local d=2 00:04:31.885 18:13:50 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:31.885 18:13:50 env -- scripts/common.sh@355 -- # echo 2 00:04:31.885 18:13:50 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:31.885 18:13:50 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:31.885 18:13:50 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:31.885 18:13:50 env -- scripts/common.sh@368 -- # return 0 00:04:31.885 18:13:50 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:31.885 18:13:50 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:31.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.885 --rc genhtml_branch_coverage=1 00:04:31.885 --rc genhtml_function_coverage=1 00:04:31.885 --rc genhtml_legend=1 00:04:31.885 --rc geninfo_all_blocks=1 00:04:31.885 --rc geninfo_unexecuted_blocks=1 00:04:31.885 00:04:31.885 ' 00:04:31.885 18:13:50 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:31.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.885 --rc genhtml_branch_coverage=1 00:04:31.885 --rc genhtml_function_coverage=1 00:04:31.885 --rc genhtml_legend=1 00:04:31.885 --rc geninfo_all_blocks=1 00:04:31.885 --rc geninfo_unexecuted_blocks=1 00:04:31.885 00:04:31.885 ' 00:04:31.885 18:13:50 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:31.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.885 --rc genhtml_branch_coverage=1 00:04:31.885 --rc genhtml_function_coverage=1 00:04:31.885 --rc genhtml_legend=1 00:04:31.885 --rc geninfo_all_blocks=1 00:04:31.885 --rc geninfo_unexecuted_blocks=1 00:04:31.885 00:04:31.885 ' 00:04:31.885 18:13:50 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:31.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.885 --rc genhtml_branch_coverage=1 00:04:31.885 --rc genhtml_function_coverage=1 00:04:31.885 --rc genhtml_legend=1 00:04:31.885 --rc geninfo_all_blocks=1 00:04:31.885 --rc geninfo_unexecuted_blocks=1 00:04:31.885 00:04:31.885 ' 00:04:31.885 18:13:50 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:31.885 18:13:50 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.885 18:13:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.885 18:13:50 env -- common/autotest_common.sh@10 -- # set +x 00:04:31.885 ************************************ 00:04:31.885 START TEST env_memory 00:04:31.885 ************************************ 00:04:31.885 18:13:50 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:31.885 00:04:31.885 00:04:31.885 CUnit - A unit testing framework for C - Version 2.1-3 00:04:31.885 http://cunit.sourceforge.net/ 00:04:31.885 00:04:31.885 00:04:31.885 Suite: memory 00:04:31.885 Test: alloc and free memory map ...[2024-11-20 18:13:50.420899] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:31.885 passed 00:04:31.885 Test: mem map translation ...[2024-11-20 18:13:50.459659] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:31.885 [2024-11-20 18:13:50.459766] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:31.885 [2024-11-20 18:13:50.459872] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:31.885 [2024-11-20 18:13:50.459910] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:32.146 passed 00:04:32.146 Test: mem map registration ...[2024-11-20 18:13:50.528819] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:32.146 [2024-11-20 18:13:50.528937] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:32.146 passed 00:04:32.146 Test: mem map adjacent registrations ...passed 00:04:32.146 00:04:32.146 Run Summary: Type Total Ran Passed Failed Inactive 00:04:32.146 suites 1 1 n/a 0 0 00:04:32.146 tests 4 4 4 0 0 00:04:32.146 asserts 152 152 152 0 n/a 00:04:32.146 00:04:32.146 Elapsed time = 0.234 seconds 00:04:32.146 00:04:32.146 real 0m0.267s 00:04:32.146 user 0m0.242s 00:04:32.146 sys 0m0.016s 00:04:32.146 18:13:50 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.146 18:13:50 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:32.146 ************************************ 00:04:32.146 END TEST env_memory 00:04:32.146 ************************************ 00:04:32.146 18:13:50 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:32.146 18:13:50 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.146 18:13:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.146 18:13:50 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.146 ************************************ 00:04:32.146 START TEST env_vtophys 00:04:32.146 ************************************ 00:04:32.146 18:13:50 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:32.146 EAL: lib.eal log level changed from notice to debug 00:04:32.146 EAL: Detected lcore 0 as core 0 on socket 0 00:04:32.146 EAL: Detected lcore 1 as core 0 on socket 0 00:04:32.146 EAL: Detected lcore 2 as core 0 on socket 0 00:04:32.146 EAL: Detected lcore 3 as core 0 on socket 0 00:04:32.146 EAL: Detected lcore 4 as core 0 on socket 0 00:04:32.146 EAL: Detected lcore 5 as core 0 on socket 0 00:04:32.146 EAL: Detected lcore 6 as core 0 on socket 0 00:04:32.146 EAL: Detected lcore 7 as core 0 on socket 0 00:04:32.146 EAL: Detected lcore 8 as core 0 on socket 0 00:04:32.146 EAL: Detected lcore 9 as core 0 on socket 0 00:04:32.146 EAL: Maximum logical cores by configuration: 128 00:04:32.146 EAL: Detected CPU lcores: 10 00:04:32.146 EAL: Detected NUMA nodes: 1 00:04:32.146 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:32.146 EAL: Detected shared linkage of DPDK 00:04:32.146 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:32.146 EAL: Selected IOVA mode 'PA' 00:04:32.146 EAL: Probing VFIO support... 00:04:32.146 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:32.146 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:32.146 EAL: Ask a virtual area of 0x2e000 bytes 00:04:32.146 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:32.146 EAL: Setting up physically contiguous memory... 00:04:32.146 EAL: Setting maximum number of open files to 524288 00:04:32.146 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:32.146 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:32.146 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.146 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:32.146 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:32.146 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.146 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:32.146 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:32.146 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.146 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:32.147 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:32.147 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.147 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:32.147 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:32.147 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.147 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:32.147 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:32.147 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.147 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:32.147 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:32.147 EAL: Ask a virtual area of 0x61000 bytes 00:04:32.147 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:32.147 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:32.147 EAL: Ask a virtual area of 0x400000000 bytes 00:04:32.147 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:32.147 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:32.147 EAL: Hugepages will be freed exactly as allocated. 00:04:32.147 EAL: No shared files mode enabled, IPC is disabled 00:04:32.147 EAL: No shared files mode enabled, IPC is disabled 00:04:32.407 EAL: TSC frequency is ~2600000 KHz 00:04:32.407 EAL: Main lcore 0 is ready (tid=7f0dbbce3a40;cpuset=[0]) 00:04:32.407 EAL: Trying to obtain current memory policy. 00:04:32.407 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.407 EAL: Restoring previous memory policy: 0 00:04:32.407 EAL: request: mp_malloc_sync 00:04:32.407 EAL: No shared files mode enabled, IPC is disabled 00:04:32.407 EAL: Heap on socket 0 was expanded by 2MB 00:04:32.407 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:32.407 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:32.407 EAL: Mem event callback 'spdk:(nil)' registered 00:04:32.407 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:32.407 00:04:32.407 00:04:32.407 CUnit - A unit testing framework for C - Version 2.1-3 00:04:32.407 http://cunit.sourceforge.net/ 00:04:32.407 00:04:32.407 00:04:32.407 Suite: components_suite 00:04:32.667 Test: vtophys_malloc_test ...passed 00:04:32.667 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:32.667 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.667 EAL: Restoring previous memory policy: 4 00:04:32.667 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.667 EAL: request: mp_malloc_sync 00:04:32.667 EAL: No shared files mode enabled, IPC is disabled 00:04:32.667 EAL: Heap on socket 0 was expanded by 4MB 00:04:32.667 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.667 EAL: request: mp_malloc_sync 00:04:32.667 EAL: No shared files mode enabled, IPC is disabled 00:04:32.667 EAL: Heap on socket 0 was shrunk by 4MB 00:04:32.667 EAL: Trying to obtain current memory policy. 00:04:32.667 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.667 EAL: Restoring previous memory policy: 4 00:04:32.667 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.667 EAL: request: mp_malloc_sync 00:04:32.667 EAL: No shared files mode enabled, IPC is disabled 00:04:32.667 EAL: Heap on socket 0 was expanded by 6MB 00:04:32.667 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.667 EAL: request: mp_malloc_sync 00:04:32.667 EAL: No shared files mode enabled, IPC is disabled 00:04:32.667 EAL: Heap on socket 0 was shrunk by 6MB 00:04:32.667 EAL: Trying to obtain current memory policy. 00:04:32.667 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.667 EAL: Restoring previous memory policy: 4 00:04:32.667 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.667 EAL: request: mp_malloc_sync 00:04:32.667 EAL: No shared files mode enabled, IPC is disabled 00:04:32.667 EAL: Heap on socket 0 was expanded by 10MB 00:04:32.667 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.667 EAL: request: mp_malloc_sync 00:04:32.667 EAL: No shared files mode enabled, IPC is disabled 00:04:32.667 EAL: Heap on socket 0 was shrunk by 10MB 00:04:32.667 EAL: Trying to obtain current memory policy. 00:04:32.667 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.667 EAL: Restoring previous memory policy: 4 00:04:32.667 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.667 EAL: request: mp_malloc_sync 00:04:32.667 EAL: No shared files mode enabled, IPC is disabled 00:04:32.667 EAL: Heap on socket 0 was expanded by 18MB 00:04:32.667 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.667 EAL: request: mp_malloc_sync 00:04:32.667 EAL: No shared files mode enabled, IPC is disabled 00:04:32.667 EAL: Heap on socket 0 was shrunk by 18MB 00:04:32.667 EAL: Trying to obtain current memory policy. 00:04:32.667 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.667 EAL: Restoring previous memory policy: 4 00:04:32.667 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.667 EAL: request: mp_malloc_sync 00:04:32.667 EAL: No shared files mode enabled, IPC is disabled 00:04:32.667 EAL: Heap on socket 0 was expanded by 34MB 00:04:32.667 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.667 EAL: request: mp_malloc_sync 00:04:32.667 EAL: No shared files mode enabled, IPC is disabled 00:04:32.667 EAL: Heap on socket 0 was shrunk by 34MB 00:04:32.927 EAL: Trying to obtain current memory policy. 
00:04:32.927 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.927 EAL: Restoring previous memory policy: 4 00:04:32.927 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.927 EAL: request: mp_malloc_sync 00:04:32.927 EAL: No shared files mode enabled, IPC is disabled 00:04:32.927 EAL: Heap on socket 0 was expanded by 66MB 00:04:32.927 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.927 EAL: request: mp_malloc_sync 00:04:32.927 EAL: No shared files mode enabled, IPC is disabled 00:04:32.927 EAL: Heap on socket 0 was shrunk by 66MB 00:04:32.927 EAL: Trying to obtain current memory policy. 00:04:32.927 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.927 EAL: Restoring previous memory policy: 4 00:04:32.927 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.927 EAL: request: mp_malloc_sync 00:04:32.927 EAL: No shared files mode enabled, IPC is disabled 00:04:32.927 EAL: Heap on socket 0 was expanded by 130MB 00:04:33.187 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.187 EAL: request: mp_malloc_sync 00:04:33.187 EAL: No shared files mode enabled, IPC is disabled 00:04:33.187 EAL: Heap on socket 0 was shrunk by 130MB 00:04:33.187 EAL: Trying to obtain current memory policy. 00:04:33.187 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.447 EAL: Restoring previous memory policy: 4 00:04:33.448 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.448 EAL: request: mp_malloc_sync 00:04:33.448 EAL: No shared files mode enabled, IPC is disabled 00:04:33.448 EAL: Heap on socket 0 was expanded by 258MB 00:04:33.708 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.708 EAL: request: mp_malloc_sync 00:04:33.708 EAL: No shared files mode enabled, IPC is disabled 00:04:33.708 EAL: Heap on socket 0 was shrunk by 258MB 00:04:33.969 EAL: Trying to obtain current memory policy. 00:04:33.969 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.969 EAL: Restoring previous memory policy: 4 00:04:33.969 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.969 EAL: request: mp_malloc_sync 00:04:33.969 EAL: No shared files mode enabled, IPC is disabled 00:04:33.969 EAL: Heap on socket 0 was expanded by 514MB 00:04:34.540 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.540 EAL: request: mp_malloc_sync 00:04:34.540 EAL: No shared files mode enabled, IPC is disabled 00:04:34.540 EAL: Heap on socket 0 was shrunk by 514MB 00:04:35.112 EAL: Trying to obtain current memory policy. 
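[editorial note] The expand/shrink rounds logged here come from vtophys_spdk_malloc_test walking allocation sizes from 4 MB up through a final 1026 MB round (continued just below); each spdk_malloc triggers the 'spdk:(nil)' mem event callback and a matching heap shrink on free. A hedged sketch of reproducing just this suite outside the harness — the vtophys binary path is taken from the run_test line earlier in this log, while the hugepage reservation via setup.sh and the HUGEMEM size are assumptions about how this VM is provisioned:

# Reserve 2 MB hugepages first (HUGEMEM is in megabytes; assumed value),
# then run the standalone vtophys binary from this repo's test tree.
sudo HUGEMEM=2048 /home/vagrant/spdk_repo/spdk/scripts/setup.sh
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys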
00:04:35.112 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.112 EAL: Restoring previous memory policy: 4 00:04:35.112 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.112 EAL: request: mp_malloc_sync 00:04:35.112 EAL: No shared files mode enabled, IPC is disabled 00:04:35.112 EAL: Heap on socket 0 was expanded by 1026MB 00:04:36.495 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.495 EAL: request: mp_malloc_sync 00:04:36.495 EAL: No shared files mode enabled, IPC is disabled 00:04:36.495 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:37.432 passed 00:04:37.432 00:04:37.432 Run Summary: Type Total Ran Passed Failed Inactive 00:04:37.432 suites 1 1 n/a 0 0 00:04:37.432 tests 2 2 2 0 0 00:04:37.432 asserts 5880 5880 5880 0 n/a 00:04:37.432 00:04:37.432 Elapsed time = 4.871 seconds 00:04:37.432 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.432 EAL: request: mp_malloc_sync 00:04:37.432 EAL: No shared files mode enabled, IPC is disabled 00:04:37.432 EAL: Heap on socket 0 was shrunk by 2MB 00:04:37.432 EAL: No shared files mode enabled, IPC is disabled 00:04:37.432 EAL: No shared files mode enabled, IPC is disabled 00:04:37.432 EAL: No shared files mode enabled, IPC is disabled 00:04:37.432 00:04:37.432 real 0m5.140s 00:04:37.432 user 0m4.329s 00:04:37.432 sys 0m0.664s 00:04:37.432 ************************************ 00:04:37.432 END TEST env_vtophys 00:04:37.432 ************************************ 00:04:37.432 18:13:55 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.432 18:13:55 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:37.432 18:13:55 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:37.432 18:13:55 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.432 18:13:55 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.432 18:13:55 env -- common/autotest_common.sh@10 -- # set +x 00:04:37.432 ************************************ 00:04:37.432 START TEST env_pci 00:04:37.432 ************************************ 00:04:37.432 18:13:55 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:37.432 00:04:37.432 00:04:37.432 CUnit - A unit testing framework for C - Version 2.1-3 00:04:37.433 http://cunit.sourceforge.net/ 00:04:37.433 00:04:37.433 00:04:37.433 Suite: pci 00:04:37.433 Test: pci_hook ...[2024-11-20 18:13:55.919673] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56973 has claimed it 00:04:37.433 EAL: Cannot find device (10000:00:01.0) 00:04:37.433 passed 00:04:37.433 00:04:37.433 Run Summary: Type Total Ran Passed Failed Inactive 00:04:37.433 suites 1 1 n/a 0 0 00:04:37.433 tests 1 1 1 0 0 00:04:37.433 asserts 25 25 25 0 n/a 00:04:37.433 00:04:37.433 Elapsed time = 0.005 seconds 00:04:37.433 EAL: Failed to attach device on primary process 00:04:37.433 00:04:37.433 real 0m0.061s 00:04:37.433 user 0m0.031s 00:04:37.433 sys 0m0.029s 00:04:37.433 18:13:55 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.433 18:13:55 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:37.433 ************************************ 00:04:37.433 END TEST env_pci 00:04:37.433 ************************************ 00:04:37.433 18:13:55 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:37.433 18:13:56 env -- env/env.sh@15 -- # uname 00:04:37.433 18:13:56 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:37.433 18:13:56 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:37.433 18:13:56 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:37.433 18:13:56 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:37.433 18:13:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.433 18:13:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:37.433 ************************************ 00:04:37.433 START TEST env_dpdk_post_init 00:04:37.433 ************************************ 00:04:37.433 18:13:56 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:37.694 EAL: Detected CPU lcores: 10 00:04:37.694 EAL: Detected NUMA nodes: 1 00:04:37.694 EAL: Detected shared linkage of DPDK 00:04:37.694 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:37.694 EAL: Selected IOVA mode 'PA' 00:04:37.694 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:37.694 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:37.694 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:37.694 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:37.694 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:37.694 Starting DPDK initialization... 00:04:37.694 Starting SPDK post initialization... 00:04:37.694 SPDK NVMe probe 00:04:37.694 Attaching to 0000:00:10.0 00:04:37.694 Attaching to 0000:00:11.0 00:04:37.694 Attaching to 0000:00:12.0 00:04:37.694 Attaching to 0000:00:13.0 00:04:37.694 Attached to 0000:00:13.0 00:04:37.694 Attached to 0000:00:10.0 00:04:37.694 Attached to 0000:00:11.0 00:04:37.694 Attached to 0000:00:12.0 00:04:37.694 Cleaning up... 
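[editorial note] The probe sequence above attached the four emulated NVMe controllers (1b36:0010) at 0000:00:10.0 through 0000:00:13.0 and cleaned up. A sketch of rerunning this step by hand, using the exact binary and flags the harness passed to run_test above (hugepages and device binding are assumed to be set up as in this VM):

# Same core mask and base virtual address as the run_test invocation above.
/home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
    -c 0x1 --base-virtaddr=0x200000000000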
00:04:37.694 00:04:37.694 real 0m0.238s 00:04:37.694 user 0m0.078s 00:04:37.694 sys 0m0.061s 00:04:37.694 18:13:56 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.694 ************************************ 00:04:37.694 END TEST env_dpdk_post_init 00:04:37.694 ************************************ 00:04:37.694 18:13:56 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:37.694 18:13:56 env -- env/env.sh@26 -- # uname 00:04:37.694 18:13:56 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:37.694 18:13:56 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:37.694 18:13:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.694 18:13:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.694 18:13:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:37.694 ************************************ 00:04:37.694 START TEST env_mem_callbacks 00:04:37.694 ************************************ 00:04:37.694 18:13:56 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:37.955 EAL: Detected CPU lcores: 10 00:04:37.955 EAL: Detected NUMA nodes: 1 00:04:37.955 EAL: Detected shared linkage of DPDK 00:04:37.955 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:37.955 EAL: Selected IOVA mode 'PA' 00:04:37.955 00:04:37.955 00:04:37.955 CUnit - A unit testing framework for C - Version 2.1-3 00:04:37.955 http://cunit.sourceforge.net/ 00:04:37.955 00:04:37.955 00:04:37.955 Suite: memory 00:04:37.955 Test: test ... 00:04:37.955 register 0x200000200000 2097152 00:04:37.955 malloc 3145728 00:04:37.955 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:37.955 register 0x200000400000 4194304 00:04:37.955 buf 0x2000004fffc0 len 3145728 PASSED 00:04:37.955 malloc 64 00:04:37.955 buf 0x2000004ffec0 len 64 PASSED 00:04:37.955 malloc 4194304 00:04:37.955 register 0x200000800000 6291456 00:04:37.955 buf 0x2000009fffc0 len 4194304 PASSED 00:04:37.955 free 0x2000004fffc0 3145728 00:04:37.955 free 0x2000004ffec0 64 00:04:37.955 unregister 0x200000400000 4194304 PASSED 00:04:37.955 free 0x2000009fffc0 4194304 00:04:37.955 unregister 0x200000800000 6291456 PASSED 00:04:37.955 malloc 8388608 00:04:37.955 register 0x200000400000 10485760 00:04:37.955 buf 0x2000005fffc0 len 8388608 PASSED 00:04:37.955 free 0x2000005fffc0 8388608 00:04:37.955 unregister 0x200000400000 10485760 PASSED 00:04:37.955 passed 00:04:37.955 00:04:37.955 Run Summary: Type Total Ran Passed Failed Inactive 00:04:37.955 suites 1 1 n/a 0 0 00:04:37.955 tests 1 1 1 0 0 00:04:37.955 asserts 15 15 15 0 n/a 00:04:37.955 00:04:37.955 Elapsed time = 0.041 seconds 00:04:37.955 00:04:37.955 real 0m0.195s 00:04:37.955 user 0m0.060s 00:04:37.955 sys 0m0.033s 00:04:37.955 18:13:56 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.955 18:13:56 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:37.955 ************************************ 00:04:37.955 END TEST env_mem_callbacks 00:04:37.955 ************************************ 00:04:37.955 00:04:37.955 real 0m6.360s 00:04:37.955 user 0m4.896s 00:04:37.955 sys 0m1.015s 00:04:37.955 18:13:56 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.955 18:13:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:37.955 ************************************ 00:04:37.955 END TEST env 00:04:37.955 
************************************ 00:04:38.215 18:13:56 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:38.215 18:13:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.215 18:13:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.215 18:13:56 -- common/autotest_common.sh@10 -- # set +x 00:04:38.215 ************************************ 00:04:38.215 START TEST rpc 00:04:38.215 ************************************ 00:04:38.215 18:13:56 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:38.215 * Looking for test storage... 00:04:38.215 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:38.215 18:13:56 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:38.215 18:13:56 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:38.215 18:13:56 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:38.215 18:13:56 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:38.215 18:13:56 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.215 18:13:56 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.215 18:13:56 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.215 18:13:56 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.215 18:13:56 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.215 18:13:56 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.215 18:13:56 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.215 18:13:56 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.215 18:13:56 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.215 18:13:56 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.215 18:13:56 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.215 18:13:56 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:38.215 18:13:56 rpc -- scripts/common.sh@345 -- # : 1 00:04:38.215 18:13:56 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.215 18:13:56 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:38.215 18:13:56 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:38.215 18:13:56 rpc -- scripts/common.sh@353 -- # local d=1 00:04:38.215 18:13:56 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.215 18:13:56 rpc -- scripts/common.sh@355 -- # echo 1 00:04:38.215 18:13:56 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.215 18:13:56 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:38.215 18:13:56 rpc -- scripts/common.sh@353 -- # local d=2 00:04:38.215 18:13:56 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.215 18:13:56 rpc -- scripts/common.sh@355 -- # echo 2 00:04:38.215 18:13:56 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.215 18:13:56 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.215 18:13:56 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.215 18:13:56 rpc -- scripts/common.sh@368 -- # return 0 00:04:38.215 18:13:56 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.215 18:13:56 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:38.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.215 --rc genhtml_branch_coverage=1 00:04:38.215 --rc genhtml_function_coverage=1 00:04:38.215 --rc genhtml_legend=1 00:04:38.215 --rc geninfo_all_blocks=1 00:04:38.215 --rc geninfo_unexecuted_blocks=1 00:04:38.215 00:04:38.215 ' 00:04:38.215 18:13:56 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:38.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.215 --rc genhtml_branch_coverage=1 00:04:38.215 --rc genhtml_function_coverage=1 00:04:38.215 --rc genhtml_legend=1 00:04:38.215 --rc geninfo_all_blocks=1 00:04:38.215 --rc geninfo_unexecuted_blocks=1 00:04:38.215 00:04:38.215 ' 00:04:38.215 18:13:56 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:38.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.215 --rc genhtml_branch_coverage=1 00:04:38.215 --rc genhtml_function_coverage=1 00:04:38.215 --rc genhtml_legend=1 00:04:38.215 --rc geninfo_all_blocks=1 00:04:38.215 --rc geninfo_unexecuted_blocks=1 00:04:38.215 00:04:38.215 ' 00:04:38.215 18:13:56 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:38.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.215 --rc genhtml_branch_coverage=1 00:04:38.215 --rc genhtml_function_coverage=1 00:04:38.215 --rc genhtml_legend=1 00:04:38.215 --rc geninfo_all_blocks=1 00:04:38.215 --rc geninfo_unexecuted_blocks=1 00:04:38.215 00:04:38.215 ' 00:04:38.215 18:13:56 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57100 00:04:38.215 18:13:56 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:38.215 18:13:56 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57100 00:04:38.215 18:13:56 rpc -- common/autotest_common.sh@835 -- # '[' -z 57100 ']' 00:04:38.215 18:13:56 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.215 18:13:56 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:38.215 18:13:56 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:38.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.215 18:13:56 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
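[editorial note] The rpc suite is launching the SPDK target with the bdev tracepoint group enabled, then blocking until the default UNIX-domain RPC socket answers. A hedged sketch of that startup outside the harness — the rpc_get_methods polling loop is an assumption standing in for the waitforlisten helper the log shows:

# Start the target with bdev tracing, then poll /var/tmp/spdk.sock until it
# responds; rpc_get_methods is used here as a lightweight liveness probe.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done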
00:04:38.215 18:13:56 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:38.215 18:13:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.474 [2024-11-20 18:13:56.853369] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:04:38.474 [2024-11-20 18:13:56.853519] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57100 ] 00:04:38.474 [2024-11-20 18:13:57.010522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.474 [2024-11-20 18:13:57.094624] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:38.474 [2024-11-20 18:13:57.094668] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57100' to capture a snapshot of events at runtime. 00:04:38.474 [2024-11-20 18:13:57.094675] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:38.474 [2024-11-20 18:13:57.094683] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:38.474 [2024-11-20 18:13:57.094689] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57100 for offline analysis/debug. 00:04:38.474 [2024-11-20 18:13:57.095390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.413 18:13:57 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:39.414 18:13:57 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:39.414 18:13:57 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:39.414 18:13:57 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:39.414 18:13:57 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:39.414 18:13:57 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:39.414 18:13:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.414 18:13:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.414 18:13:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.414 ************************************ 00:04:39.414 START TEST rpc_integrity 00:04:39.414 ************************************ 00:04:39.414 18:13:57 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:39.414 18:13:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:39.414 18:13:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.414 18:13:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.414 18:13:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.414 18:13:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:39.414 18:13:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:39.414 18:13:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:39.414 18:13:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:39.414 18:13:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.414 18:13:57 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.414 18:13:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.414 18:13:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:39.414 18:13:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:39.414 18:13:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.414 18:13:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.414 18:13:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.414 18:13:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:39.414 { 00:04:39.414 "name": "Malloc0", 00:04:39.414 "aliases": [ 00:04:39.414 "195a08bc-2efd-4dfc-9a2b-c0168f044651" 00:04:39.414 ], 00:04:39.414 "product_name": "Malloc disk", 00:04:39.414 "block_size": 512, 00:04:39.414 "num_blocks": 16384, 00:04:39.414 "uuid": "195a08bc-2efd-4dfc-9a2b-c0168f044651", 00:04:39.414 "assigned_rate_limits": { 00:04:39.414 "rw_ios_per_sec": 0, 00:04:39.414 "rw_mbytes_per_sec": 0, 00:04:39.414 "r_mbytes_per_sec": 0, 00:04:39.414 "w_mbytes_per_sec": 0 00:04:39.414 }, 00:04:39.414 "claimed": false, 00:04:39.414 "zoned": false, 00:04:39.414 "supported_io_types": { 00:04:39.414 "read": true, 00:04:39.414 "write": true, 00:04:39.414 "unmap": true, 00:04:39.414 "flush": true, 00:04:39.414 "reset": true, 00:04:39.414 "nvme_admin": false, 00:04:39.414 "nvme_io": false, 00:04:39.414 "nvme_io_md": false, 00:04:39.414 "write_zeroes": true, 00:04:39.414 "zcopy": true, 00:04:39.414 "get_zone_info": false, 00:04:39.414 "zone_management": false, 00:04:39.414 "zone_append": false, 00:04:39.414 "compare": false, 00:04:39.414 "compare_and_write": false, 00:04:39.414 "abort": true, 00:04:39.414 "seek_hole": false, 00:04:39.414 "seek_data": false, 00:04:39.414 "copy": true, 00:04:39.414 "nvme_iov_md": false 00:04:39.414 }, 00:04:39.414 "memory_domains": [ 00:04:39.414 { 00:04:39.414 "dma_device_id": "system", 00:04:39.414 "dma_device_type": 1 00:04:39.414 }, 00:04:39.414 { 00:04:39.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.414 "dma_device_type": 2 00:04:39.414 } 00:04:39.414 ], 00:04:39.414 "driver_specific": {} 00:04:39.414 } 00:04:39.414 ]' 00:04:39.414 18:13:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:39.414 18:13:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:39.414 18:13:57 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:39.414 18:13:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.414 18:13:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.414 [2024-11-20 18:13:57.823046] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:39.414 [2024-11-20 18:13:57.823138] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:39.414 [2024-11-20 18:13:57.823167] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:39.414 [2024-11-20 18:13:57.823180] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:39.414 [2024-11-20 18:13:57.825640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:39.414 [2024-11-20 18:13:57.825701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:39.414 Passthru0 00:04:39.414 18:13:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.414 
18:13:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:39.414 18:13:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.414 18:13:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.414 18:13:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.414 18:13:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:39.414 { 00:04:39.414 "name": "Malloc0", 00:04:39.414 "aliases": [ 00:04:39.414 "195a08bc-2efd-4dfc-9a2b-c0168f044651" 00:04:39.414 ], 00:04:39.414 "product_name": "Malloc disk", 00:04:39.414 "block_size": 512, 00:04:39.414 "num_blocks": 16384, 00:04:39.414 "uuid": "195a08bc-2efd-4dfc-9a2b-c0168f044651", 00:04:39.414 "assigned_rate_limits": { 00:04:39.414 "rw_ios_per_sec": 0, 00:04:39.414 "rw_mbytes_per_sec": 0, 00:04:39.414 "r_mbytes_per_sec": 0, 00:04:39.414 "w_mbytes_per_sec": 0 00:04:39.414 }, 00:04:39.414 "claimed": true, 00:04:39.414 "claim_type": "exclusive_write", 00:04:39.414 "zoned": false, 00:04:39.414 "supported_io_types": { 00:04:39.414 "read": true, 00:04:39.414 "write": true, 00:04:39.414 "unmap": true, 00:04:39.414 "flush": true, 00:04:39.414 "reset": true, 00:04:39.414 "nvme_admin": false, 00:04:39.414 "nvme_io": false, 00:04:39.414 "nvme_io_md": false, 00:04:39.414 "write_zeroes": true, 00:04:39.414 "zcopy": true, 00:04:39.414 "get_zone_info": false, 00:04:39.414 "zone_management": false, 00:04:39.414 "zone_append": false, 00:04:39.414 "compare": false, 00:04:39.414 "compare_and_write": false, 00:04:39.414 "abort": true, 00:04:39.414 "seek_hole": false, 00:04:39.414 "seek_data": false, 00:04:39.414 "copy": true, 00:04:39.414 "nvme_iov_md": false 00:04:39.414 }, 00:04:39.414 "memory_domains": [ 00:04:39.414 { 00:04:39.414 "dma_device_id": "system", 00:04:39.414 "dma_device_type": 1 00:04:39.414 }, 00:04:39.414 { 00:04:39.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.414 "dma_device_type": 2 00:04:39.414 } 00:04:39.414 ], 00:04:39.414 "driver_specific": {} 00:04:39.414 }, 00:04:39.414 { 00:04:39.414 "name": "Passthru0", 00:04:39.414 "aliases": [ 00:04:39.414 "985d70bd-9af9-5c98-99b0-9f8994a7a60b" 00:04:39.414 ], 00:04:39.414 "product_name": "passthru", 00:04:39.414 "block_size": 512, 00:04:39.414 "num_blocks": 16384, 00:04:39.414 "uuid": "985d70bd-9af9-5c98-99b0-9f8994a7a60b", 00:04:39.414 "assigned_rate_limits": { 00:04:39.414 "rw_ios_per_sec": 0, 00:04:39.415 "rw_mbytes_per_sec": 0, 00:04:39.415 "r_mbytes_per_sec": 0, 00:04:39.415 "w_mbytes_per_sec": 0 00:04:39.415 }, 00:04:39.415 "claimed": false, 00:04:39.415 "zoned": false, 00:04:39.415 "supported_io_types": { 00:04:39.415 "read": true, 00:04:39.415 "write": true, 00:04:39.415 "unmap": true, 00:04:39.415 "flush": true, 00:04:39.415 "reset": true, 00:04:39.415 "nvme_admin": false, 00:04:39.415 "nvme_io": false, 00:04:39.415 "nvme_io_md": false, 00:04:39.415 "write_zeroes": true, 00:04:39.415 "zcopy": true, 00:04:39.415 "get_zone_info": false, 00:04:39.415 "zone_management": false, 00:04:39.415 "zone_append": false, 00:04:39.415 "compare": false, 00:04:39.415 "compare_and_write": false, 00:04:39.415 "abort": true, 00:04:39.415 "seek_hole": false, 00:04:39.415 "seek_data": false, 00:04:39.415 "copy": true, 00:04:39.415 "nvme_iov_md": false 00:04:39.415 }, 00:04:39.415 "memory_domains": [ 00:04:39.415 { 00:04:39.415 "dma_device_id": "system", 00:04:39.415 "dma_device_type": 1 00:04:39.415 }, 00:04:39.415 { 00:04:39.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.415 "dma_device_type": 2 
00:04:39.415 } 00:04:39.415 ], 00:04:39.415 "driver_specific": { 00:04:39.415 "passthru": { 00:04:39.415 "name": "Passthru0", 00:04:39.415 "base_bdev_name": "Malloc0" 00:04:39.415 } 00:04:39.415 } 00:04:39.415 } 00:04:39.415 ]' 00:04:39.415 18:13:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:39.415 18:13:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:39.415 18:13:57 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:39.415 18:13:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.415 18:13:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.415 18:13:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.415 18:13:57 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:39.415 18:13:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.415 18:13:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.415 18:13:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.415 18:13:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:39.415 18:13:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.415 18:13:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.415 18:13:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.415 18:13:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:39.415 18:13:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:39.415 18:13:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:39.415 00:04:39.415 real 0m0.248s 00:04:39.415 user 0m0.128s 00:04:39.415 sys 0m0.031s 00:04:39.415 18:13:57 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.415 ************************************ 00:04:39.415 END TEST rpc_integrity 00:04:39.415 ************************************ 00:04:39.415 18:13:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.415 18:13:58 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:39.415 18:13:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.415 18:13:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.415 18:13:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.415 ************************************ 00:04:39.415 START TEST rpc_plugins 00:04:39.415 ************************************ 00:04:39.415 18:13:58 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:39.415 18:13:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:39.415 18:13:58 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.415 18:13:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:39.415 18:13:58 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.415 18:13:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:39.415 18:13:58 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:39.415 18:13:58 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.415 18:13:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:39.676 18:13:58 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.676 18:13:58 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:39.676 { 00:04:39.676 "name": "Malloc1", 00:04:39.677 "aliases": 
[ 00:04:39.677 "4c00c2ef-8cd2-4179-ad6e-e2a6d1f37767" 00:04:39.677 ], 00:04:39.677 "product_name": "Malloc disk", 00:04:39.677 "block_size": 4096, 00:04:39.677 "num_blocks": 256, 00:04:39.677 "uuid": "4c00c2ef-8cd2-4179-ad6e-e2a6d1f37767", 00:04:39.677 "assigned_rate_limits": { 00:04:39.677 "rw_ios_per_sec": 0, 00:04:39.677 "rw_mbytes_per_sec": 0, 00:04:39.677 "r_mbytes_per_sec": 0, 00:04:39.677 "w_mbytes_per_sec": 0 00:04:39.677 }, 00:04:39.677 "claimed": false, 00:04:39.677 "zoned": false, 00:04:39.677 "supported_io_types": { 00:04:39.677 "read": true, 00:04:39.677 "write": true, 00:04:39.677 "unmap": true, 00:04:39.677 "flush": true, 00:04:39.677 "reset": true, 00:04:39.677 "nvme_admin": false, 00:04:39.677 "nvme_io": false, 00:04:39.677 "nvme_io_md": false, 00:04:39.677 "write_zeroes": true, 00:04:39.677 "zcopy": true, 00:04:39.677 "get_zone_info": false, 00:04:39.677 "zone_management": false, 00:04:39.677 "zone_append": false, 00:04:39.677 "compare": false, 00:04:39.677 "compare_and_write": false, 00:04:39.677 "abort": true, 00:04:39.677 "seek_hole": false, 00:04:39.677 "seek_data": false, 00:04:39.677 "copy": true, 00:04:39.677 "nvme_iov_md": false 00:04:39.677 }, 00:04:39.677 "memory_domains": [ 00:04:39.677 { 00:04:39.677 "dma_device_id": "system", 00:04:39.677 "dma_device_type": 1 00:04:39.677 }, 00:04:39.677 { 00:04:39.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.677 "dma_device_type": 2 00:04:39.677 } 00:04:39.677 ], 00:04:39.677 "driver_specific": {} 00:04:39.677 } 00:04:39.677 ]' 00:04:39.677 18:13:58 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:39.677 18:13:58 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:39.677 18:13:58 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:39.677 18:13:58 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.677 18:13:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:39.677 18:13:58 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.677 18:13:58 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:39.677 18:13:58 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.677 18:13:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:39.677 18:13:58 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.677 18:13:58 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:39.677 18:13:58 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:39.677 18:13:58 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:39.677 00:04:39.677 real 0m0.117s 00:04:39.677 user 0m0.062s 00:04:39.677 sys 0m0.017s 00:04:39.677 18:13:58 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.677 18:13:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:39.677 ************************************ 00:04:39.677 END TEST rpc_plugins 00:04:39.677 ************************************ 00:04:39.677 18:13:58 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:39.677 18:13:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.677 18:13:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.677 18:13:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.677 ************************************ 00:04:39.677 START TEST rpc_trace_cmd_test 00:04:39.677 ************************************ 00:04:39.677 18:13:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:04:39.677 18:13:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:39.677 18:13:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:39.677 18:13:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.677 18:13:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:39.677 18:13:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.677 18:13:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:39.677 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57100", 00:04:39.677 "tpoint_group_mask": "0x8", 00:04:39.677 "iscsi_conn": { 00:04:39.677 "mask": "0x2", 00:04:39.677 "tpoint_mask": "0x0" 00:04:39.677 }, 00:04:39.677 "scsi": { 00:04:39.677 "mask": "0x4", 00:04:39.677 "tpoint_mask": "0x0" 00:04:39.677 }, 00:04:39.677 "bdev": { 00:04:39.677 "mask": "0x8", 00:04:39.677 "tpoint_mask": "0xffffffffffffffff" 00:04:39.677 }, 00:04:39.677 "nvmf_rdma": { 00:04:39.677 "mask": "0x10", 00:04:39.677 "tpoint_mask": "0x0" 00:04:39.677 }, 00:04:39.677 "nvmf_tcp": { 00:04:39.677 "mask": "0x20", 00:04:39.677 "tpoint_mask": "0x0" 00:04:39.677 }, 00:04:39.677 "ftl": { 00:04:39.677 "mask": "0x40", 00:04:39.677 "tpoint_mask": "0x0" 00:04:39.677 }, 00:04:39.677 "blobfs": { 00:04:39.677 "mask": "0x80", 00:04:39.677 "tpoint_mask": "0x0" 00:04:39.677 }, 00:04:39.677 "dsa": { 00:04:39.677 "mask": "0x200", 00:04:39.677 "tpoint_mask": "0x0" 00:04:39.677 }, 00:04:39.677 "thread": { 00:04:39.677 "mask": "0x400", 00:04:39.677 "tpoint_mask": "0x0" 00:04:39.677 }, 00:04:39.677 "nvme_pcie": { 00:04:39.677 "mask": "0x800", 00:04:39.677 "tpoint_mask": "0x0" 00:04:39.677 }, 00:04:39.677 "iaa": { 00:04:39.677 "mask": "0x1000", 00:04:39.677 "tpoint_mask": "0x0" 00:04:39.677 }, 00:04:39.677 "nvme_tcp": { 00:04:39.677 "mask": "0x2000", 00:04:39.677 "tpoint_mask": "0x0" 00:04:39.677 }, 00:04:39.677 "bdev_nvme": { 00:04:39.677 "mask": "0x4000", 00:04:39.677 "tpoint_mask": "0x0" 00:04:39.677 }, 00:04:39.677 "sock": { 00:04:39.677 "mask": "0x8000", 00:04:39.677 "tpoint_mask": "0x0" 00:04:39.677 }, 00:04:39.677 "blob": { 00:04:39.677 "mask": "0x10000", 00:04:39.677 "tpoint_mask": "0x0" 00:04:39.677 }, 00:04:39.677 "bdev_raid": { 00:04:39.677 "mask": "0x20000", 00:04:39.677 "tpoint_mask": "0x0" 00:04:39.677 }, 00:04:39.677 "scheduler": { 00:04:39.677 "mask": "0x40000", 00:04:39.677 "tpoint_mask": "0x0" 00:04:39.677 } 00:04:39.677 }' 00:04:39.677 18:13:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:39.677 18:13:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:39.677 18:13:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:39.677 18:13:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:39.677 18:13:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:39.677 18:13:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:39.939 18:13:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:39.939 18:13:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:39.939 18:13:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:39.939 18:13:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:39.939 00:04:39.939 real 0m0.174s 00:04:39.939 user 0m0.138s 00:04:39.939 sys 0m0.025s 00:04:39.939 18:13:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:04:39.939 ************************************ 00:04:39.939 END TEST rpc_trace_cmd_test 00:04:39.939 18:13:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:39.939 ************************************ 00:04:39.939 18:13:58 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:39.939 18:13:58 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:39.939 18:13:58 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:39.939 18:13:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.939 18:13:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.939 18:13:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.939 ************************************ 00:04:39.939 START TEST rpc_daemon_integrity 00:04:39.939 ************************************ 00:04:39.939 18:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:39.939 18:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:39.939 18:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.939 18:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.939 18:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.939 18:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:39.939 18:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:39.939 18:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:39.939 18:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:39.939 18:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.939 18:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.939 18:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.939 18:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:39.939 18:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:39.939 18:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.939 18:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.939 18:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.939 18:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:39.939 { 00:04:39.939 "name": "Malloc2", 00:04:39.939 "aliases": [ 00:04:39.939 "01e4a9b2-fc90-4d52-b90f-8889cb7515d5" 00:04:39.939 ], 00:04:39.939 "product_name": "Malloc disk", 00:04:39.939 "block_size": 512, 00:04:39.939 "num_blocks": 16384, 00:04:39.939 "uuid": "01e4a9b2-fc90-4d52-b90f-8889cb7515d5", 00:04:39.939 "assigned_rate_limits": { 00:04:39.939 "rw_ios_per_sec": 0, 00:04:39.939 "rw_mbytes_per_sec": 0, 00:04:39.939 "r_mbytes_per_sec": 0, 00:04:39.939 "w_mbytes_per_sec": 0 00:04:39.939 }, 00:04:39.939 "claimed": false, 00:04:39.939 "zoned": false, 00:04:39.939 "supported_io_types": { 00:04:39.939 "read": true, 00:04:39.939 "write": true, 00:04:39.939 "unmap": true, 00:04:39.939 "flush": true, 00:04:39.939 "reset": true, 00:04:39.939 "nvme_admin": false, 00:04:39.939 "nvme_io": false, 00:04:39.939 "nvme_io_md": false, 00:04:39.939 "write_zeroes": true, 00:04:39.939 "zcopy": true, 00:04:39.939 "get_zone_info": false, 00:04:39.939 "zone_management": false, 00:04:39.939 "zone_append": false, 00:04:39.939 "compare": false, 00:04:39.939 
"compare_and_write": false, 00:04:39.939 "abort": true, 00:04:39.939 "seek_hole": false, 00:04:39.939 "seek_data": false, 00:04:39.939 "copy": true, 00:04:39.939 "nvme_iov_md": false 00:04:39.939 }, 00:04:39.939 "memory_domains": [ 00:04:39.939 { 00:04:39.939 "dma_device_id": "system", 00:04:39.939 "dma_device_type": 1 00:04:39.939 }, 00:04:39.939 { 00:04:39.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.939 "dma_device_type": 2 00:04:39.939 } 00:04:39.939 ], 00:04:39.939 "driver_specific": {} 00:04:39.939 } 00:04:39.939 ]' 00:04:39.939 18:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:39.939 18:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:39.939 18:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:39.939 18:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.939 18:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.939 [2024-11-20 18:13:58.516030] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:39.939 [2024-11-20 18:13:58.516256] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:39.939 [2024-11-20 18:13:58.516287] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:39.940 [2024-11-20 18:13:58.516300] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:39.940 [2024-11-20 18:13:58.518685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:39.940 [2024-11-20 18:13:58.518740] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:39.940 Passthru0 00:04:39.940 18:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.940 18:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:39.940 18:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.940 18:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.940 18:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.940 18:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:39.940 { 00:04:39.940 "name": "Malloc2", 00:04:39.940 "aliases": [ 00:04:39.940 "01e4a9b2-fc90-4d52-b90f-8889cb7515d5" 00:04:39.940 ], 00:04:39.940 "product_name": "Malloc disk", 00:04:39.940 "block_size": 512, 00:04:39.940 "num_blocks": 16384, 00:04:39.940 "uuid": "01e4a9b2-fc90-4d52-b90f-8889cb7515d5", 00:04:39.940 "assigned_rate_limits": { 00:04:39.940 "rw_ios_per_sec": 0, 00:04:39.940 "rw_mbytes_per_sec": 0, 00:04:39.940 "r_mbytes_per_sec": 0, 00:04:39.940 "w_mbytes_per_sec": 0 00:04:39.940 }, 00:04:39.940 "claimed": true, 00:04:39.940 "claim_type": "exclusive_write", 00:04:39.940 "zoned": false, 00:04:39.940 "supported_io_types": { 00:04:39.940 "read": true, 00:04:39.940 "write": true, 00:04:39.940 "unmap": true, 00:04:39.940 "flush": true, 00:04:39.940 "reset": true, 00:04:39.940 "nvme_admin": false, 00:04:39.940 "nvme_io": false, 00:04:39.940 "nvme_io_md": false, 00:04:39.940 "write_zeroes": true, 00:04:39.940 "zcopy": true, 00:04:39.940 "get_zone_info": false, 00:04:39.940 "zone_management": false, 00:04:39.940 "zone_append": false, 00:04:39.940 "compare": false, 00:04:39.940 "compare_and_write": false, 00:04:39.940 "abort": true, 00:04:39.940 "seek_hole": false, 00:04:39.940 "seek_data": false, 
00:04:39.940 "copy": true, 00:04:39.940 "nvme_iov_md": false 00:04:39.940 }, 00:04:39.940 "memory_domains": [ 00:04:39.940 { 00:04:39.940 "dma_device_id": "system", 00:04:39.940 "dma_device_type": 1 00:04:39.940 }, 00:04:39.940 { 00:04:39.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.940 "dma_device_type": 2 00:04:39.940 } 00:04:39.940 ], 00:04:39.940 "driver_specific": {} 00:04:39.940 }, 00:04:39.940 { 00:04:39.940 "name": "Passthru0", 00:04:39.940 "aliases": [ 00:04:39.940 "2a0c5301-b890-5fd3-be8e-c0083ed7be17" 00:04:39.940 ], 00:04:39.940 "product_name": "passthru", 00:04:39.940 "block_size": 512, 00:04:39.940 "num_blocks": 16384, 00:04:39.940 "uuid": "2a0c5301-b890-5fd3-be8e-c0083ed7be17", 00:04:39.940 "assigned_rate_limits": { 00:04:39.940 "rw_ios_per_sec": 0, 00:04:39.940 "rw_mbytes_per_sec": 0, 00:04:39.940 "r_mbytes_per_sec": 0, 00:04:39.940 "w_mbytes_per_sec": 0 00:04:39.940 }, 00:04:39.940 "claimed": false, 00:04:39.940 "zoned": false, 00:04:39.940 "supported_io_types": { 00:04:39.940 "read": true, 00:04:39.940 "write": true, 00:04:39.940 "unmap": true, 00:04:39.940 "flush": true, 00:04:39.940 "reset": true, 00:04:39.940 "nvme_admin": false, 00:04:39.940 "nvme_io": false, 00:04:39.940 "nvme_io_md": false, 00:04:39.940 "write_zeroes": true, 00:04:39.940 "zcopy": true, 00:04:39.940 "get_zone_info": false, 00:04:39.940 "zone_management": false, 00:04:39.940 "zone_append": false, 00:04:39.940 "compare": false, 00:04:39.940 "compare_and_write": false, 00:04:39.940 "abort": true, 00:04:39.940 "seek_hole": false, 00:04:39.940 "seek_data": false, 00:04:39.940 "copy": true, 00:04:39.940 "nvme_iov_md": false 00:04:39.940 }, 00:04:39.940 "memory_domains": [ 00:04:39.940 { 00:04:39.940 "dma_device_id": "system", 00:04:39.940 "dma_device_type": 1 00:04:39.940 }, 00:04:39.940 { 00:04:39.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.940 "dma_device_type": 2 00:04:39.940 } 00:04:39.940 ], 00:04:39.940 "driver_specific": { 00:04:39.940 "passthru": { 00:04:39.940 "name": "Passthru0", 00:04:39.940 "base_bdev_name": "Malloc2" 00:04:39.940 } 00:04:39.940 } 00:04:39.940 } 00:04:39.940 ]' 00:04:39.940 18:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:40.202 18:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:40.202 18:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:40.202 18:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.202 18:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.202 18:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.202 18:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:40.202 18:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.202 18:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.202 18:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.202 18:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:40.202 18:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.202 18:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.202 18:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.202 18:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:04:40.202 18:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:40.202 ************************************ 00:04:40.202 END TEST rpc_daemon_integrity 00:04:40.202 ************************************ 00:04:40.202 18:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:40.202 00:04:40.202 real 0m0.234s 00:04:40.202 user 0m0.130s 00:04:40.202 sys 0m0.028s 00:04:40.202 18:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.202 18:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.202 18:13:58 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:40.202 18:13:58 rpc -- rpc/rpc.sh@84 -- # killprocess 57100 00:04:40.202 18:13:58 rpc -- common/autotest_common.sh@954 -- # '[' -z 57100 ']' 00:04:40.202 18:13:58 rpc -- common/autotest_common.sh@958 -- # kill -0 57100 00:04:40.202 18:13:58 rpc -- common/autotest_common.sh@959 -- # uname 00:04:40.202 18:13:58 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:40.202 18:13:58 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57100 00:04:40.202 killing process with pid 57100 00:04:40.202 18:13:58 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:40.202 18:13:58 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:40.202 18:13:58 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57100' 00:04:40.202 18:13:58 rpc -- common/autotest_common.sh@973 -- # kill 57100 00:04:40.202 18:13:58 rpc -- common/autotest_common.sh@978 -- # wait 57100 00:04:41.627 00:04:41.627 real 0m3.269s 00:04:41.627 user 0m3.684s 00:04:41.627 sys 0m0.622s 00:04:41.627 18:13:59 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.627 ************************************ 00:04:41.627 END TEST rpc 00:04:41.627 ************************************ 00:04:41.627 18:13:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.627 18:13:59 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:41.627 18:13:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.627 18:13:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.627 18:13:59 -- common/autotest_common.sh@10 -- # set +x 00:04:41.627 ************************************ 00:04:41.627 START TEST skip_rpc 00:04:41.627 ************************************ 00:04:41.627 18:13:59 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:41.627 * Looking for test storage... 
00:04:41.627 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:41.627 18:13:59 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:41.627 18:13:59 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:41.627 18:13:59 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:41.627 18:14:00 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:41.627 18:14:00 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.627 18:14:00 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.627 18:14:00 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.627 18:14:00 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.627 18:14:00 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.627 18:14:00 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.627 18:14:00 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.627 18:14:00 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.627 18:14:00 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.627 18:14:00 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.627 18:14:00 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.627 18:14:00 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:41.627 18:14:00 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:41.627 18:14:00 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.627 18:14:00 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:41.627 18:14:00 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:41.627 18:14:00 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:41.627 18:14:00 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.627 18:14:00 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:41.627 18:14:00 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.627 18:14:00 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:41.627 18:14:00 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:41.627 18:14:00 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.627 18:14:00 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:41.627 18:14:00 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.627 18:14:00 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.627 18:14:00 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.627 18:14:00 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:41.627 18:14:00 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.627 18:14:00 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:41.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.627 --rc genhtml_branch_coverage=1 00:04:41.627 --rc genhtml_function_coverage=1 00:04:41.627 --rc genhtml_legend=1 00:04:41.627 --rc geninfo_all_blocks=1 00:04:41.627 --rc geninfo_unexecuted_blocks=1 00:04:41.627 00:04:41.627 ' 00:04:41.627 18:14:00 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:41.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.627 --rc genhtml_branch_coverage=1 00:04:41.627 --rc genhtml_function_coverage=1 00:04:41.627 --rc genhtml_legend=1 00:04:41.627 --rc geninfo_all_blocks=1 00:04:41.627 --rc geninfo_unexecuted_blocks=1 00:04:41.627 00:04:41.627 ' 00:04:41.627 18:14:00 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:04:41.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.627 --rc genhtml_branch_coverage=1 00:04:41.627 --rc genhtml_function_coverage=1 00:04:41.627 --rc genhtml_legend=1 00:04:41.627 --rc geninfo_all_blocks=1 00:04:41.627 --rc geninfo_unexecuted_blocks=1 00:04:41.627 00:04:41.627 ' 00:04:41.627 18:14:00 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:41.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.627 --rc genhtml_branch_coverage=1 00:04:41.627 --rc genhtml_function_coverage=1 00:04:41.627 --rc genhtml_legend=1 00:04:41.627 --rc geninfo_all_blocks=1 00:04:41.627 --rc geninfo_unexecuted_blocks=1 00:04:41.627 00:04:41.627 ' 00:04:41.627 18:14:00 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:41.627 18:14:00 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:41.627 18:14:00 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:41.627 18:14:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.627 18:14:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.627 18:14:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.627 ************************************ 00:04:41.627 START TEST skip_rpc 00:04:41.627 ************************************ 00:04:41.627 18:14:00 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:41.627 18:14:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57307 00:04:41.627 18:14:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.627 18:14:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:41.627 18:14:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:41.627 [2024-11-20 18:14:00.164636] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
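The five-second gap that follows is the whole point of the test just launched: with --no-rpc-server, any RPC must fail, and the NOT helper turns that failure into a pass. In outline (a sketch; binary path and flags as in the trace, backgrounding and pid handling assumed from the harness):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
    sleep 5                                  # let the target finish starting up
    NOT rpc_cmd spdk_get_version             # passes only because the RPC fails
    trap - SIGINT SIGTERM EXIT
    killprocess "$spdk_pid"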
00:04:41.628 [2024-11-20 18:14:00.164779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57307 ] 00:04:41.884 [2024-11-20 18:14:00.325034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.884 [2024-11-20 18:14:00.401914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.142 18:14:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:47.142 18:14:05 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:47.142 18:14:05 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:47.142 18:14:05 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:47.142 18:14:05 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:47.142 18:14:05 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:47.142 18:14:05 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:47.142 18:14:05 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:47.142 18:14:05 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.142 18:14:05 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.142 18:14:05 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:47.142 18:14:05 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:47.142 18:14:05 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:47.142 18:14:05 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:47.142 18:14:05 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:47.142 18:14:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:47.142 18:14:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57307 00:04:47.142 18:14:05 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57307 ']' 00:04:47.142 18:14:05 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57307 00:04:47.142 18:14:05 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:47.142 18:14:05 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:47.142 18:14:05 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57307 00:04:47.142 18:14:05 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:47.142 18:14:05 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:47.142 18:14:05 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57307' 00:04:47.142 killing process with pid 57307 00:04:47.142 18:14:05 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57307 00:04:47.142 18:14:05 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57307 00:04:47.709 00:04:47.709 ************************************ 00:04:47.709 END TEST skip_rpc 00:04:47.709 ************************************ 00:04:47.709 real 0m6.188s 00:04:47.709 user 0m5.817s 00:04:47.709 sys 0m0.271s 00:04:47.709 18:14:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.709 18:14:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:04:47.709 18:14:06 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:47.709 18:14:06 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.709 18:14:06 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.709 18:14:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.709 ************************************ 00:04:47.709 START TEST skip_rpc_with_json 00:04:47.709 ************************************ 00:04:47.709 18:14:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:47.709 18:14:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:47.709 18:14:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57400 00:04:47.709 18:14:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:47.709 18:14:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57400 00:04:47.709 18:14:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57400 ']' 00:04:47.709 18:14:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.709 18:14:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:47.709 18:14:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:47.709 18:14:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.709 18:14:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:47.709 18:14:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.967 [2024-11-20 18:14:06.403646] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
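The skip_rpc_with_json run unfolding below has a save-and-replay shape: configure a TCP transport over RPC, snapshot the configuration, then restart with --json and no RPC server to prove the snapshot alone recreates it. In outline (a sketch; CONFIG_PATH and LOG_PATH are the config.json and log.txt paths defined at the top of this suite):

    spdk_tgt -m 0x1 & spdk_pid=$!
    waitforlisten "$spdk_pid"
    rpc_cmd nvmf_get_transports --trtype tcp || true   # errors: no transport yet
    rpc_cmd nvmf_create_transport -t tcp
    rpc_cmd save_config > "$CONFIG_PATH"
    killprocess "$spdk_pid"
    spdk_tgt --no-rpc-server -m 0x1 --json "$CONFIG_PATH" &> "$LOG_PATH" & spdk_pid=$!
    sleep 5
    killprocess "$spdk_pid"
    grep -q 'TCP Transport Init' "$LOG_PATH"           # transport came back from JSON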
00:04:47.967 [2024-11-20 18:14:06.403753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57400 ] 00:04:47.967 [2024-11-20 18:14:06.557426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.225 [2024-11-20 18:14:06.633191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.791 18:14:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.791 18:14:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:48.791 18:14:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:48.791 18:14:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.791 18:14:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:48.791 [2024-11-20 18:14:07.231268] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:48.791 request: 00:04:48.791 { 00:04:48.791 "trtype": "tcp", 00:04:48.791 "method": "nvmf_get_transports", 00:04:48.791 "req_id": 1 00:04:48.791 } 00:04:48.791 Got JSON-RPC error response 00:04:48.791 response: 00:04:48.791 { 00:04:48.791 "code": -19, 00:04:48.791 "message": "No such device" 00:04:48.791 } 00:04:48.791 18:14:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:48.791 18:14:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:48.791 18:14:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.791 18:14:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:48.791 [2024-11-20 18:14:07.243363] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:48.791 18:14:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.791 18:14:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:48.791 18:14:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.791 18:14:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:48.791 18:14:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.791 18:14:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:48.791 { 00:04:48.791 "subsystems": [ 00:04:48.791 { 00:04:48.791 "subsystem": "fsdev", 00:04:48.791 "config": [ 00:04:48.791 { 00:04:48.791 "method": "fsdev_set_opts", 00:04:48.791 "params": { 00:04:48.791 "fsdev_io_pool_size": 65535, 00:04:48.791 "fsdev_io_cache_size": 256 00:04:48.791 } 00:04:48.791 } 00:04:48.791 ] 00:04:48.791 }, 00:04:48.791 { 00:04:48.791 "subsystem": "keyring", 00:04:48.791 "config": [] 00:04:48.791 }, 00:04:48.791 { 00:04:48.791 "subsystem": "iobuf", 00:04:48.791 "config": [ 00:04:48.791 { 00:04:48.791 "method": "iobuf_set_options", 00:04:48.791 "params": { 00:04:48.792 "small_pool_count": 8192, 00:04:48.792 "large_pool_count": 1024, 00:04:48.792 "small_bufsize": 8192, 00:04:48.792 "large_bufsize": 135168, 00:04:48.792 "enable_numa": false 00:04:48.792 } 00:04:48.792 } 00:04:48.792 ] 00:04:48.792 }, 00:04:48.792 { 00:04:48.792 "subsystem": "sock", 00:04:48.792 "config": [ 00:04:48.792 { 
00:04:48.792 "method": "sock_set_default_impl", 00:04:48.792 "params": { 00:04:48.792 "impl_name": "posix" 00:04:48.792 } 00:04:48.792 }, 00:04:48.792 { 00:04:48.792 "method": "sock_impl_set_options", 00:04:48.792 "params": { 00:04:48.792 "impl_name": "ssl", 00:04:48.792 "recv_buf_size": 4096, 00:04:48.792 "send_buf_size": 4096, 00:04:48.792 "enable_recv_pipe": true, 00:04:48.792 "enable_quickack": false, 00:04:48.792 "enable_placement_id": 0, 00:04:48.792 "enable_zerocopy_send_server": true, 00:04:48.792 "enable_zerocopy_send_client": false, 00:04:48.792 "zerocopy_threshold": 0, 00:04:48.792 "tls_version": 0, 00:04:48.792 "enable_ktls": false 00:04:48.792 } 00:04:48.792 }, 00:04:48.792 { 00:04:48.792 "method": "sock_impl_set_options", 00:04:48.792 "params": { 00:04:48.792 "impl_name": "posix", 00:04:48.792 "recv_buf_size": 2097152, 00:04:48.792 "send_buf_size": 2097152, 00:04:48.792 "enable_recv_pipe": true, 00:04:48.792 "enable_quickack": false, 00:04:48.792 "enable_placement_id": 0, 00:04:48.792 "enable_zerocopy_send_server": true, 00:04:48.792 "enable_zerocopy_send_client": false, 00:04:48.792 "zerocopy_threshold": 0, 00:04:48.792 "tls_version": 0, 00:04:48.792 "enable_ktls": false 00:04:48.792 } 00:04:48.792 } 00:04:48.792 ] 00:04:48.792 }, 00:04:48.792 { 00:04:48.792 "subsystem": "vmd", 00:04:48.792 "config": [] 00:04:48.792 }, 00:04:48.792 { 00:04:48.792 "subsystem": "accel", 00:04:48.792 "config": [ 00:04:48.792 { 00:04:48.792 "method": "accel_set_options", 00:04:48.792 "params": { 00:04:48.792 "small_cache_size": 128, 00:04:48.792 "large_cache_size": 16, 00:04:48.792 "task_count": 2048, 00:04:48.792 "sequence_count": 2048, 00:04:48.792 "buf_count": 2048 00:04:48.792 } 00:04:48.792 } 00:04:48.792 ] 00:04:48.792 }, 00:04:48.792 { 00:04:48.792 "subsystem": "bdev", 00:04:48.792 "config": [ 00:04:48.792 { 00:04:48.792 "method": "bdev_set_options", 00:04:48.792 "params": { 00:04:48.792 "bdev_io_pool_size": 65535, 00:04:48.792 "bdev_io_cache_size": 256, 00:04:48.792 "bdev_auto_examine": true, 00:04:48.792 "iobuf_small_cache_size": 128, 00:04:48.792 "iobuf_large_cache_size": 16 00:04:48.792 } 00:04:48.792 }, 00:04:48.792 { 00:04:48.792 "method": "bdev_raid_set_options", 00:04:48.792 "params": { 00:04:48.792 "process_window_size_kb": 1024, 00:04:48.792 "process_max_bandwidth_mb_sec": 0 00:04:48.792 } 00:04:48.792 }, 00:04:48.792 { 00:04:48.792 "method": "bdev_iscsi_set_options", 00:04:48.792 "params": { 00:04:48.792 "timeout_sec": 30 00:04:48.792 } 00:04:48.792 }, 00:04:48.792 { 00:04:48.792 "method": "bdev_nvme_set_options", 00:04:48.792 "params": { 00:04:48.792 "action_on_timeout": "none", 00:04:48.792 "timeout_us": 0, 00:04:48.792 "timeout_admin_us": 0, 00:04:48.792 "keep_alive_timeout_ms": 10000, 00:04:48.792 "arbitration_burst": 0, 00:04:48.792 "low_priority_weight": 0, 00:04:48.792 "medium_priority_weight": 0, 00:04:48.792 "high_priority_weight": 0, 00:04:48.792 "nvme_adminq_poll_period_us": 10000, 00:04:48.792 "nvme_ioq_poll_period_us": 0, 00:04:48.792 "io_queue_requests": 0, 00:04:48.792 "delay_cmd_submit": true, 00:04:48.792 "transport_retry_count": 4, 00:04:48.792 "bdev_retry_count": 3, 00:04:48.792 "transport_ack_timeout": 0, 00:04:48.792 "ctrlr_loss_timeout_sec": 0, 00:04:48.792 "reconnect_delay_sec": 0, 00:04:48.792 "fast_io_fail_timeout_sec": 0, 00:04:48.792 "disable_auto_failback": false, 00:04:48.792 "generate_uuids": false, 00:04:48.792 "transport_tos": 0, 00:04:48.792 "nvme_error_stat": false, 00:04:48.792 "rdma_srq_size": 0, 00:04:48.792 "io_path_stat": false, 
00:04:48.792 "allow_accel_sequence": false, 00:04:48.792 "rdma_max_cq_size": 0, 00:04:48.792 "rdma_cm_event_timeout_ms": 0, 00:04:48.792 "dhchap_digests": [ 00:04:48.792 "sha256", 00:04:48.792 "sha384", 00:04:48.792 "sha512" 00:04:48.792 ], 00:04:48.792 "dhchap_dhgroups": [ 00:04:48.792 "null", 00:04:48.792 "ffdhe2048", 00:04:48.792 "ffdhe3072", 00:04:48.792 "ffdhe4096", 00:04:48.792 "ffdhe6144", 00:04:48.792 "ffdhe8192" 00:04:48.792 ] 00:04:48.792 } 00:04:48.792 }, 00:04:48.792 { 00:04:48.792 "method": "bdev_nvme_set_hotplug", 00:04:48.792 "params": { 00:04:48.792 "period_us": 100000, 00:04:48.792 "enable": false 00:04:48.792 } 00:04:48.792 }, 00:04:48.792 { 00:04:48.792 "method": "bdev_wait_for_examine" 00:04:48.792 } 00:04:48.792 ] 00:04:48.792 }, 00:04:48.792 { 00:04:48.792 "subsystem": "scsi", 00:04:48.792 "config": null 00:04:48.792 }, 00:04:48.792 { 00:04:48.792 "subsystem": "scheduler", 00:04:48.792 "config": [ 00:04:48.792 { 00:04:48.792 "method": "framework_set_scheduler", 00:04:48.792 "params": { 00:04:48.792 "name": "static" 00:04:48.792 } 00:04:48.792 } 00:04:48.792 ] 00:04:48.792 }, 00:04:48.792 { 00:04:48.792 "subsystem": "vhost_scsi", 00:04:48.792 "config": [] 00:04:48.792 }, 00:04:48.792 { 00:04:48.792 "subsystem": "vhost_blk", 00:04:48.792 "config": [] 00:04:48.792 }, 00:04:48.792 { 00:04:48.792 "subsystem": "ublk", 00:04:48.792 "config": [] 00:04:48.792 }, 00:04:48.792 { 00:04:48.792 "subsystem": "nbd", 00:04:48.792 "config": [] 00:04:48.792 }, 00:04:48.792 { 00:04:48.792 "subsystem": "nvmf", 00:04:48.792 "config": [ 00:04:48.792 { 00:04:48.792 "method": "nvmf_set_config", 00:04:48.792 "params": { 00:04:48.792 "discovery_filter": "match_any", 00:04:48.792 "admin_cmd_passthru": { 00:04:48.792 "identify_ctrlr": false 00:04:48.792 }, 00:04:48.792 "dhchap_digests": [ 00:04:48.792 "sha256", 00:04:48.792 "sha384", 00:04:48.792 "sha512" 00:04:48.792 ], 00:04:48.792 "dhchap_dhgroups": [ 00:04:48.792 "null", 00:04:48.792 "ffdhe2048", 00:04:48.792 "ffdhe3072", 00:04:48.792 "ffdhe4096", 00:04:48.792 "ffdhe6144", 00:04:48.792 "ffdhe8192" 00:04:48.792 ] 00:04:48.792 } 00:04:48.792 }, 00:04:48.792 { 00:04:48.792 "method": "nvmf_set_max_subsystems", 00:04:48.792 "params": { 00:04:48.792 "max_subsystems": 1024 00:04:48.792 } 00:04:48.792 }, 00:04:48.792 { 00:04:48.792 "method": "nvmf_set_crdt", 00:04:48.792 "params": { 00:04:48.792 "crdt1": 0, 00:04:48.792 "crdt2": 0, 00:04:48.792 "crdt3": 0 00:04:48.792 } 00:04:48.792 }, 00:04:48.792 { 00:04:48.792 "method": "nvmf_create_transport", 00:04:48.792 "params": { 00:04:48.792 "trtype": "TCP", 00:04:48.792 "max_queue_depth": 128, 00:04:48.792 "max_io_qpairs_per_ctrlr": 127, 00:04:48.792 "in_capsule_data_size": 4096, 00:04:48.792 "max_io_size": 131072, 00:04:48.792 "io_unit_size": 131072, 00:04:48.792 "max_aq_depth": 128, 00:04:48.792 "num_shared_buffers": 511, 00:04:48.792 "buf_cache_size": 4294967295, 00:04:48.792 "dif_insert_or_strip": false, 00:04:48.792 "zcopy": false, 00:04:48.792 "c2h_success": true, 00:04:48.792 "sock_priority": 0, 00:04:48.792 "abort_timeout_sec": 1, 00:04:48.792 "ack_timeout": 0, 00:04:48.792 "data_wr_pool_size": 0 00:04:48.792 } 00:04:48.792 } 00:04:48.792 ] 00:04:48.792 }, 00:04:48.792 { 00:04:48.792 "subsystem": "iscsi", 00:04:48.792 "config": [ 00:04:48.792 { 00:04:48.792 "method": "iscsi_set_options", 00:04:48.792 "params": { 00:04:48.792 "node_base": "iqn.2016-06.io.spdk", 00:04:48.792 "max_sessions": 128, 00:04:48.792 "max_connections_per_session": 2, 00:04:48.792 "max_queue_depth": 64, 00:04:48.792 
"default_time2wait": 2, 00:04:48.792 "default_time2retain": 20, 00:04:48.792 "first_burst_length": 8192, 00:04:48.792 "immediate_data": true, 00:04:48.792 "allow_duplicated_isid": false, 00:04:48.792 "error_recovery_level": 0, 00:04:48.793 "nop_timeout": 60, 00:04:48.793 "nop_in_interval": 30, 00:04:48.793 "disable_chap": false, 00:04:48.793 "require_chap": false, 00:04:48.793 "mutual_chap": false, 00:04:48.793 "chap_group": 0, 00:04:48.793 "max_large_datain_per_connection": 64, 00:04:48.793 "max_r2t_per_connection": 4, 00:04:48.793 "pdu_pool_size": 36864, 00:04:48.793 "immediate_data_pool_size": 16384, 00:04:48.793 "data_out_pool_size": 2048 00:04:48.793 } 00:04:48.793 } 00:04:48.793 ] 00:04:48.793 } 00:04:48.793 ] 00:04:48.793 } 00:04:48.793 18:14:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:48.793 18:14:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57400 00:04:48.793 18:14:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57400 ']' 00:04:48.793 18:14:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57400 00:04:48.793 18:14:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:48.793 18:14:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:48.793 18:14:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57400 00:04:49.054 killing process with pid 57400 00:04:49.054 18:14:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:49.054 18:14:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:49.054 18:14:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57400' 00:04:49.054 18:14:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57400 00:04:49.054 18:14:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57400 00:04:49.989 18:14:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57440 00:04:49.989 18:14:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:49.989 18:14:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:55.326 18:14:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57440 00:04:55.326 18:14:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57440 ']' 00:04:55.326 18:14:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57440 00:04:55.326 18:14:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:55.326 18:14:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:55.326 18:14:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57440 00:04:55.326 killing process with pid 57440 00:04:55.326 18:14:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:55.326 18:14:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:55.326 18:14:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57440' 00:04:55.326 18:14:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 57440 00:04:55.326 18:14:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57440 00:04:56.261 18:14:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:56.261 18:14:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:56.261 00:04:56.261 real 0m8.425s 00:04:56.261 user 0m8.089s 00:04:56.261 sys 0m0.554s 00:04:56.261 18:14:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.261 ************************************ 00:04:56.261 END TEST skip_rpc_with_json 00:04:56.261 ************************************ 00:04:56.261 18:14:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:56.261 18:14:14 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:56.261 18:14:14 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.261 18:14:14 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.261 18:14:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.261 ************************************ 00:04:56.261 START TEST skip_rpc_with_delay 00:04:56.261 ************************************ 00:04:56.261 18:14:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:56.261 18:14:14 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:56.261 18:14:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:56.261 18:14:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:56.261 18:14:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.261 18:14:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:56.261 18:14:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.261 18:14:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:56.261 18:14:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.261 18:14:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:56.261 18:14:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.261 18:14:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:56.261 18:14:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:56.261 [2024-11-20 18:14:14.884583] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
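skip_rpc_with_delay, which just failed as intended above, needs no running target at all: --wait-for-rpc combined with --no-rpc-server is a contradiction the app refuses at startup. The whole test is essentially one line (sketch, flags as in the trace):

    NOT spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
    # expected on stderr: Cannot use '--wait-for-rpc' if no RPC server
    # is going to be started.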
00:04:56.522 ************************************ 00:04:56.522 END TEST skip_rpc_with_delay 00:04:56.522 ************************************ 00:04:56.522 18:14:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:56.522 18:14:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:56.522 18:14:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:56.522 18:14:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:56.522 00:04:56.522 real 0m0.122s 00:04:56.522 user 0m0.066s 00:04:56.522 sys 0m0.055s 00:04:56.522 18:14:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.522 18:14:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:56.522 18:14:14 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:56.522 18:14:14 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:56.522 18:14:14 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:56.522 18:14:14 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.522 18:14:14 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.522 18:14:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.522 ************************************ 00:04:56.522 START TEST exit_on_failed_rpc_init 00:04:56.522 ************************************ 00:04:56.522 18:14:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:56.522 18:14:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57557 00:04:56.522 18:14:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57557 00:04:56.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.522 18:14:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57557 ']' 00:04:56.522 18:14:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.522 18:14:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:56.522 18:14:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.522 18:14:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.522 18:14:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.522 18:14:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:56.522 [2024-11-20 18:14:15.058447] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
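exit_on_failed_rpc_init, starting here, checks that a second target pointed at the same default RPC socket fails initialization and exits non-zero. In outline (a sketch; core masks and socket path as in the trace below):

    spdk_tgt -m 0x1 & spdk_pid=$!
    waitforlisten "$spdk_pid"       # first target now owns /var/tmp/spdk.sock
    NOT spdk_tgt -m 0x2             # second target: socket in use, init must fail
    killprocess "$spdk_pid"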
00:04:56.522 [2024-11-20 18:14:15.058556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57557 ] 00:04:56.780 [2024-11-20 18:14:15.209741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.780 [2024-11-20 18:14:15.285088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.347 18:14:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.347 18:14:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:57.347 18:14:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:57.347 18:14:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:57.347 18:14:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:57.347 18:14:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:57.347 18:14:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:57.347 18:14:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:57.347 18:14:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:57.347 18:14:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:57.347 18:14:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:57.347 18:14:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:57.347 18:14:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:57.347 18:14:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:57.347 18:14:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:57.347 [2024-11-20 18:14:15.920851] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:04:57.347 [2024-11-20 18:14:15.921123] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57575 ] 00:04:57.606 [2024-11-20 18:14:16.080502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.606 [2024-11-20 18:14:16.177505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.606 [2024-11-20 18:14:16.177592] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:57.606 [2024-11-20 18:14:16.177605] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:57.606 [2024-11-20 18:14:16.177618] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:57.864 18:14:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:57.864 18:14:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:57.864 18:14:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:57.864 18:14:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:57.864 18:14:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:57.864 18:14:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:57.864 18:14:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:57.864 18:14:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57557 00:04:57.864 18:14:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57557 ']' 00:04:57.864 18:14:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57557 00:04:57.864 18:14:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:57.864 18:14:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:57.864 18:14:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57557 00:04:57.864 killing process with pid 57557 00:04:57.864 18:14:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:57.864 18:14:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:57.864 18:14:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57557' 00:04:57.864 18:14:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57557 00:04:57.864 18:14:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57557 00:04:59.241 00:04:59.241 real 0m2.553s 00:04:59.241 user 0m2.853s 00:04:59.241 sys 0m0.359s 00:04:59.241 18:14:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.241 ************************************ 00:04:59.241 END TEST exit_on_failed_rpc_init 00:04:59.241 ************************************ 00:04:59.241 18:14:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:59.241 18:14:17 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:59.241 00:04:59.241 real 0m17.658s 00:04:59.241 user 0m16.972s 00:04:59.241 sys 0m1.407s 00:04:59.241 18:14:17 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.241 18:14:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.241 ************************************ 00:04:59.241 END TEST skip_rpc 00:04:59.241 ************************************ 00:04:59.241 18:14:17 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:59.241 18:14:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.241 18:14:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.241 18:14:17 -- common/autotest_common.sh@10 -- # set +x 00:04:59.241 
************************************ 00:04:59.241 START TEST rpc_client 00:04:59.241 ************************************ 00:04:59.241 18:14:17 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:59.241 * Looking for test storage... 00:04:59.241 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:59.241 18:14:17 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:59.241 18:14:17 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:59.241 18:14:17 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:59.241 18:14:17 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:59.241 18:14:17 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.241 18:14:17 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.241 18:14:17 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.241 18:14:17 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.241 18:14:17 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.241 18:14:17 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.241 18:14:17 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.241 18:14:17 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.241 18:14:17 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.241 18:14:17 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.241 18:14:17 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.241 18:14:17 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:59.241 18:14:17 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:59.241 18:14:17 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.241 18:14:17 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:59.241 18:14:17 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:59.241 18:14:17 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:59.241 18:14:17 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.241 18:14:17 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:59.241 18:14:17 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.241 18:14:17 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:59.241 18:14:17 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:59.241 18:14:17 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.241 18:14:17 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:59.241 18:14:17 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.241 18:14:17 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.241 18:14:17 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.241 18:14:17 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:59.241 18:14:17 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.241 18:14:17 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:59.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.241 --rc genhtml_branch_coverage=1 00:04:59.241 --rc genhtml_function_coverage=1 00:04:59.241 --rc genhtml_legend=1 00:04:59.241 --rc geninfo_all_blocks=1 00:04:59.241 --rc geninfo_unexecuted_blocks=1 00:04:59.241 00:04:59.241 ' 00:04:59.241 18:14:17 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:59.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.241 --rc genhtml_branch_coverage=1 00:04:59.241 --rc genhtml_function_coverage=1 00:04:59.241 --rc genhtml_legend=1 00:04:59.241 --rc geninfo_all_blocks=1 00:04:59.241 --rc geninfo_unexecuted_blocks=1 00:04:59.241 00:04:59.241 ' 00:04:59.241 18:14:17 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:59.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.241 --rc genhtml_branch_coverage=1 00:04:59.241 --rc genhtml_function_coverage=1 00:04:59.241 --rc genhtml_legend=1 00:04:59.241 --rc geninfo_all_blocks=1 00:04:59.241 --rc geninfo_unexecuted_blocks=1 00:04:59.241 00:04:59.241 ' 00:04:59.241 18:14:17 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:59.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.241 --rc genhtml_branch_coverage=1 00:04:59.241 --rc genhtml_function_coverage=1 00:04:59.241 --rc genhtml_legend=1 00:04:59.241 --rc geninfo_all_blocks=1 00:04:59.241 --rc geninfo_unexecuted_blocks=1 00:04:59.241 00:04:59.241 ' 00:04:59.241 18:14:17 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:59.241 OK 00:04:59.241 18:14:17 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:59.241 ************************************ 00:04:59.241 END TEST rpc_client 00:04:59.241 ************************************ 00:04:59.241 00:04:59.241 real 0m0.189s 00:04:59.241 user 0m0.103s 00:04:59.241 sys 0m0.086s 00:04:59.241 18:14:17 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.241 18:14:17 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:59.500 18:14:17 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:59.500 18:14:17 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.500 18:14:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.500 18:14:17 -- common/autotest_common.sh@10 -- # set +x 00:04:59.500 ************************************ 00:04:59.500 START TEST json_config 00:04:59.500 ************************************ 00:04:59.500 18:14:17 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:59.500 18:14:17 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:59.500 18:14:17 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:59.500 18:14:17 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:59.500 18:14:17 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:59.500 18:14:17 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.500 18:14:17 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.500 18:14:17 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.500 18:14:17 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.500 18:14:17 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.500 18:14:17 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.500 18:14:17 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.500 18:14:17 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.500 18:14:17 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.500 18:14:17 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.500 18:14:17 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.500 18:14:17 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:59.500 18:14:17 json_config -- scripts/common.sh@345 -- # : 1 00:04:59.500 18:14:17 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.500 18:14:17 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:59.500 18:14:17 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:59.500 18:14:17 json_config -- scripts/common.sh@353 -- # local d=1 00:04:59.500 18:14:17 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.500 18:14:17 json_config -- scripts/common.sh@355 -- # echo 1 00:04:59.500 18:14:17 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.500 18:14:17 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:59.500 18:14:17 json_config -- scripts/common.sh@353 -- # local d=2 00:04:59.500 18:14:17 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.500 18:14:17 json_config -- scripts/common.sh@355 -- # echo 2 00:04:59.500 18:14:17 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.501 18:14:17 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.501 18:14:17 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.501 18:14:17 json_config -- scripts/common.sh@368 -- # return 0 00:04:59.501 18:14:17 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.501 18:14:17 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:59.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.501 --rc genhtml_branch_coverage=1 00:04:59.501 --rc genhtml_function_coverage=1 00:04:59.501 --rc genhtml_legend=1 00:04:59.501 --rc geninfo_all_blocks=1 00:04:59.501 --rc geninfo_unexecuted_blocks=1 00:04:59.501 00:04:59.501 ' 00:04:59.501 18:14:17 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:59.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.501 --rc genhtml_branch_coverage=1 00:04:59.501 --rc genhtml_function_coverage=1 00:04:59.501 --rc genhtml_legend=1 00:04:59.501 --rc geninfo_all_blocks=1 00:04:59.501 --rc geninfo_unexecuted_blocks=1 00:04:59.501 00:04:59.501 ' 00:04:59.501 18:14:17 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:59.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.501 --rc genhtml_branch_coverage=1 00:04:59.501 --rc genhtml_function_coverage=1 00:04:59.501 --rc genhtml_legend=1 00:04:59.501 --rc geninfo_all_blocks=1 00:04:59.501 --rc geninfo_unexecuted_blocks=1 00:04:59.501 00:04:59.501 ' 00:04:59.501 18:14:17 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:59.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.501 --rc genhtml_branch_coverage=1 00:04:59.501 --rc genhtml_function_coverage=1 00:04:59.501 --rc genhtml_legend=1 00:04:59.501 --rc geninfo_all_blocks=1 00:04:59.501 --rc geninfo_unexecuted_blocks=1 00:04:59.501 00:04:59.501 ' 00:04:59.501 18:14:17 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:59.501 18:14:17 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:59.501 18:14:17 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:59.501 18:14:17 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:59.501 18:14:17 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:59.501 18:14:17 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:59.501 18:14:17 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:59.501 18:14:17 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:59.501 18:14:17 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:59.501 18:14:17 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:59.501 18:14:17 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:59.501 18:14:17 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:59.501 18:14:18 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:aed9d235-300d-4f9e-8b5d-d05d02d268cd 00:04:59.501 18:14:18 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=aed9d235-300d-4f9e-8b5d-d05d02d268cd 00:04:59.501 18:14:18 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:59.501 18:14:18 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:59.501 18:14:18 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:59.501 18:14:18 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:59.501 18:14:18 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:59.501 18:14:18 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:59.501 18:14:18 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:59.501 18:14:18 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:59.501 18:14:18 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:59.501 18:14:18 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.501 18:14:18 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.501 18:14:18 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.501 18:14:18 json_config -- paths/export.sh@5 -- # export PATH 00:04:59.501 18:14:18 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.501 18:14:18 json_config -- nvmf/common.sh@51 -- # : 0 00:04:59.501 18:14:18 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:59.501 18:14:18 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:59.501 18:14:18 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:59.501 18:14:18 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:59.501 18:14:18 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:59.501 18:14:18 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:59.501 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:59.501 18:14:18 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:59.501 18:14:18 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:59.501 18:14:18 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:59.501 18:14:18 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:59.501 18:14:18 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:59.501 18:14:18 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:59.501 18:14:18 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:59.501 18:14:18 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:59.501 WARNING: No tests are enabled so not running JSON configuration tests 00:04:59.501 18:14:18 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:59.501 18:14:18 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:59.501 00:04:59.501 real 0m0.136s 00:04:59.501 user 0m0.087s 00:04:59.501 sys 0m0.050s 00:04:59.501 18:14:18 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.501 18:14:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.501 ************************************ 00:04:59.501 END TEST json_config 00:04:59.501 ************************************ 00:04:59.501 18:14:18 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:59.501 18:14:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.501 18:14:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.501 18:14:18 -- common/autotest_common.sh@10 -- # set +x 00:04:59.501 ************************************ 00:04:59.501 START TEST json_config_extra_key 00:04:59.501 ************************************ 00:04:59.501 18:14:18 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:59.501 18:14:18 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:59.501 18:14:18 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:59.501 18:14:18 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:59.761 18:14:18 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:59.761 18:14:18 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.761 18:14:18 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.761 18:14:18 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.761 18:14:18 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.761 18:14:18 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.761 18:14:18 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.761 18:14:18 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.761 18:14:18 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.761 18:14:18 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.761 18:14:18 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.761 18:14:18 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.761 18:14:18 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:59.761 18:14:18 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:59.761 18:14:18 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.761 18:14:18 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:59.761 18:14:18 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:59.761 18:14:18 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:59.761 18:14:18 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.761 18:14:18 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:59.761 18:14:18 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.761 18:14:18 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:59.761 18:14:18 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:59.761 18:14:18 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.761 18:14:18 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:59.761 18:14:18 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.761 18:14:18 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.761 18:14:18 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.761 18:14:18 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:59.761 18:14:18 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.761 18:14:18 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:59.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.761 --rc genhtml_branch_coverage=1 00:04:59.761 --rc genhtml_function_coverage=1 00:04:59.761 --rc genhtml_legend=1 00:04:59.761 --rc geninfo_all_blocks=1 00:04:59.761 --rc geninfo_unexecuted_blocks=1 00:04:59.761 00:04:59.761 ' 00:04:59.761 18:14:18 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:59.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.761 --rc genhtml_branch_coverage=1 00:04:59.761 --rc genhtml_function_coverage=1 00:04:59.761 --rc genhtml_legend=1 00:04:59.761 --rc geninfo_all_blocks=1 00:04:59.761 --rc geninfo_unexecuted_blocks=1 00:04:59.761 00:04:59.761 ' 00:04:59.761 18:14:18 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:59.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.761 --rc genhtml_branch_coverage=1 00:04:59.761 --rc genhtml_function_coverage=1 00:04:59.761 --rc genhtml_legend=1 00:04:59.761 --rc geninfo_all_blocks=1 00:04:59.761 --rc geninfo_unexecuted_blocks=1 00:04:59.761 00:04:59.761 ' 00:04:59.761 18:14:18 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:59.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.761 --rc genhtml_branch_coverage=1 00:04:59.761 --rc 
genhtml_function_coverage=1 00:04:59.761 --rc genhtml_legend=1 00:04:59.761 --rc geninfo_all_blocks=1 00:04:59.761 --rc geninfo_unexecuted_blocks=1 00:04:59.761 00:04:59.761 ' 00:04:59.761 18:14:18 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:59.761 18:14:18 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:59.761 18:14:18 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:59.761 18:14:18 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:59.761 18:14:18 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:59.761 18:14:18 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:59.761 18:14:18 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:59.761 18:14:18 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:59.761 18:14:18 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:59.761 18:14:18 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:59.761 18:14:18 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:59.761 18:14:18 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:59.761 18:14:18 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:aed9d235-300d-4f9e-8b5d-d05d02d268cd 00:04:59.761 18:14:18 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=aed9d235-300d-4f9e-8b5d-d05d02d268cd 00:04:59.761 18:14:18 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:59.761 18:14:18 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:59.761 18:14:18 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:59.761 18:14:18 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:59.761 18:14:18 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:59.761 18:14:18 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:59.761 18:14:18 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:59.761 18:14:18 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:59.761 18:14:18 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:59.761 18:14:18 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.761 18:14:18 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.761 18:14:18 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.761 18:14:18 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:59.761 18:14:18 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.761 18:14:18 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:59.761 18:14:18 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:59.761 18:14:18 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:59.761 18:14:18 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:59.761 18:14:18 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:59.761 18:14:18 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:59.761 18:14:18 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:59.761 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:59.761 18:14:18 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:59.761 18:14:18 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:59.761 18:14:18 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:59.761 18:14:18 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:59.761 18:14:18 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:59.761 18:14:18 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:59.761 18:14:18 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:59.761 18:14:18 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:59.761 18:14:18 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:59.761 18:14:18 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:59.761 18:14:18 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:59.761 18:14:18 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:59.761 18:14:18 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:59.761 18:14:18 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:59.761 INFO: launching applications... 
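The trace above surfaces a real scripting bug: nvmf/common.sh line 33 executes '[' '' -eq 1 ']' and bash reports "[: : integer expression expected" because the flag variable expands to an empty string inside a numeric test. A minimal sketch of the usual guard, assuming a hypothetical flag name (the actual variable used at common.sh line 33 is not visible in this trace):

    # Default the flag to 0 before the numeric comparison so an unset
    # or empty variable can never reach [ -eq ] as an empty operand.
    flag=${SPDK_SOME_FLAG:-0}
    if [ "$flag" -eq 1 ]; then
        echo "flag enabled"
    fi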
00:04:59.761 18:14:18 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:59.761 18:14:18 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:59.761 18:14:18 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:59.762 18:14:18 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:59.762 18:14:18 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:59.762 18:14:18 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:59.762 18:14:18 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:59.762 18:14:18 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:59.762 18:14:18 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57763 00:04:59.762 18:14:18 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:59.762 Waiting for target to run... 00:04:59.762 18:14:18 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57763 /var/tmp/spdk_tgt.sock 00:04:59.762 18:14:18 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57763 ']' 00:04:59.762 18:14:18 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:59.762 18:14:18 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:59.762 18:14:18 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.762 18:14:18 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:59.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:59.762 18:14:18 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.762 18:14:18 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:59.762 [2024-11-20 18:14:18.291005] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:04:59.762 [2024-11-20 18:14:18.291321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57763 ] 00:05:00.022 [2024-11-20 18:14:18.614066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.280 [2024-11-20 18:14:18.707116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.850 00:05:00.850 INFO: shutting down applications... 00:05:00.850 18:14:19 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.850 18:14:19 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:00.850 18:14:19 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:00.850 18:14:19 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
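As traced above, json_config_test_start_app launches spdk_tgt in the background (-m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json extra_key.json) and waitforlisten polls with max_retries=100 until the RPC socket is up before the test proceeds. A minimal sketch of that wait pattern under a hypothetical helper name (the real waitforlisten in autotest_common.sh also checks that the process itself is still alive):

    # Poll for the UNIX-domain RPC socket, giving up after max_retries.
    wait_for_rpc_socket() {
        local sock=$1 max_retries=${2:-100}
        while (( max_retries-- > 0 )); do
            [ -S "$sock" ] && return 0   # socket file exists
            sleep 0.1
        done
        return 1
    }
    wait_for_rpc_socket /var/tmp/spdk_tgt.sock 100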
00:05:00.850 18:14:19 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:00.850 18:14:19 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:00.850 18:14:19 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:00.850 18:14:19 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57763 ]] 00:05:00.850 18:14:19 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57763 00:05:00.850 18:14:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:00.850 18:14:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:00.850 18:14:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57763 00:05:00.850 18:14:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:01.110 18:14:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:01.110 18:14:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:01.110 18:14:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57763 00:05:01.110 18:14:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:01.677 18:14:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:01.677 18:14:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:01.677 18:14:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57763 00:05:01.677 18:14:20 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:02.248 18:14:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:02.248 18:14:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:02.248 18:14:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57763 00:05:02.248 18:14:20 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:02.879 18:14:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:02.879 18:14:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:02.879 18:14:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57763 00:05:02.879 SPDK target shutdown done 00:05:02.879 Success 00:05:02.879 18:14:21 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:02.879 18:14:21 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:02.879 18:14:21 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:02.879 18:14:21 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:02.879 18:14:21 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:02.879 ************************************ 00:05:02.879 END TEST json_config_extra_key 00:05:02.879 ************************************ 00:05:02.879 00:05:02.879 real 0m3.148s 00:05:02.879 user 0m2.746s 00:05:02.879 sys 0m0.385s 00:05:02.879 18:14:21 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.879 18:14:21 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:02.880 18:14:21 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:02.880 18:14:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.880 18:14:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.880 18:14:21 -- common/autotest_common.sh@10 -- # set +x 00:05:02.880 
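The shutdown sequence traced above (json_config/common.sh lines 38-45) sends SIGINT to pid 57763, then loops up to 30 times probing kill -0 with a 0.5 s sleep between probes until the target exits. A minimal sketch of the same pattern:

    # Send SIGINT, then poll for exit: kill -0 delivers no signal,
    # it only tests whether the pid still refers to a live process.
    shutdown_app() {
        local pid=$1 i=0
        kill -SIGINT "$pid"
        while (( i++ < 30 )); do
            kill -0 "$pid" 2>/dev/null || return 0
            sleep 0.5
        done
        return 1   # still running after ~15 s
    }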
************************************ 00:05:02.880 START TEST alias_rpc 00:05:02.880 ************************************ 00:05:02.880 18:14:21 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:02.880 * Looking for test storage... 00:05:02.880 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:02.880 18:14:21 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:02.880 18:14:21 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:02.880 18:14:21 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:02.880 18:14:21 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:02.880 18:14:21 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.880 18:14:21 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.880 18:14:21 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.880 18:14:21 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.880 18:14:21 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.880 18:14:21 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.880 18:14:21 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.880 18:14:21 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.880 18:14:21 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.880 18:14:21 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.880 18:14:21 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.880 18:14:21 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:02.880 18:14:21 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:02.880 18:14:21 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.880 18:14:21 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:02.880 18:14:21 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:02.880 18:14:21 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:02.880 18:14:21 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.880 18:14:21 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:02.880 18:14:21 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.880 18:14:21 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:02.880 18:14:21 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:02.880 18:14:21 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.880 18:14:21 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:02.880 18:14:21 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.880 18:14:21 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.880 18:14:21 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.880 18:14:21 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:02.880 18:14:21 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.880 18:14:21 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:02.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.880 --rc genhtml_branch_coverage=1 00:05:02.880 --rc genhtml_function_coverage=1 00:05:02.880 --rc genhtml_legend=1 00:05:02.880 --rc geninfo_all_blocks=1 00:05:02.880 --rc geninfo_unexecuted_blocks=1 00:05:02.880 00:05:02.880 ' 00:05:02.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:02.880 18:14:21 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:02.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.880 --rc genhtml_branch_coverage=1 00:05:02.880 --rc genhtml_function_coverage=1 00:05:02.880 --rc genhtml_legend=1 00:05:02.880 --rc geninfo_all_blocks=1 00:05:02.880 --rc geninfo_unexecuted_blocks=1 00:05:02.880 00:05:02.880 ' 00:05:02.880 18:14:21 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:02.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.880 --rc genhtml_branch_coverage=1 00:05:02.880 --rc genhtml_function_coverage=1 00:05:02.880 --rc genhtml_legend=1 00:05:02.880 --rc geninfo_all_blocks=1 00:05:02.880 --rc geninfo_unexecuted_blocks=1 00:05:02.880 00:05:02.880 ' 00:05:02.880 18:14:21 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:02.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.880 --rc genhtml_branch_coverage=1 00:05:02.880 --rc genhtml_function_coverage=1 00:05:02.880 --rc genhtml_legend=1 00:05:02.880 --rc geninfo_all_blocks=1 00:05:02.880 --rc geninfo_unexecuted_blocks=1 00:05:02.880 00:05:02.880 ' 00:05:02.880 18:14:21 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:02.880 18:14:21 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57856 00:05:02.880 18:14:21 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57856 00:05:02.880 18:14:21 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57856 ']' 00:05:02.880 18:14:21 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.880 18:14:21 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.880 18:14:21 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.880 18:14:21 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.880 18:14:21 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:02.880 18:14:21 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.880 [2024-11-20 18:14:21.472254] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:05:02.880 [2024-11-20 18:14:21.472922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57856 ] 00:05:03.141 [2024-11-20 18:14:21.633736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.141 [2024-11-20 18:14:21.730431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.710 18:14:22 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.710 18:14:22 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:03.710 18:14:22 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:03.970 18:14:22 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57856 00:05:03.970 18:14:22 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57856 ']' 00:05:03.970 18:14:22 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57856 00:05:03.970 18:14:22 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:03.970 18:14:22 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.970 18:14:22 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57856 00:05:03.970 killing process with pid 57856 00:05:03.970 18:14:22 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:03.970 18:14:22 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:03.970 18:14:22 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57856' 00:05:03.970 18:14:22 alias_rpc -- common/autotest_common.sh@973 -- # kill 57856 00:05:03.970 18:14:22 alias_rpc -- common/autotest_common.sh@978 -- # wait 57856 00:05:05.352 ************************************ 00:05:05.352 END TEST alias_rpc 00:05:05.352 ************************************ 00:05:05.352 00:05:05.352 real 0m2.719s 00:05:05.352 user 0m2.804s 00:05:05.352 sys 0m0.401s 00:05:05.352 18:14:23 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.352 18:14:23 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.613 18:14:24 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:05.613 18:14:24 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:05.613 18:14:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.613 18:14:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.613 18:14:24 -- common/autotest_common.sh@10 -- # set +x 00:05:05.613 ************************************ 00:05:05.613 START TEST spdkcli_tcp 00:05:05.613 ************************************ 00:05:05.613 18:14:24 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:05.613 * Looking for test storage... 
00:05:05.613 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:05.613 18:14:24 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:05.613 18:14:24 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:05.613 18:14:24 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:05.613 18:14:24 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:05.613 18:14:24 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.613 18:14:24 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.613 18:14:24 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.613 18:14:24 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.613 18:14:24 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.613 18:14:24 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.613 18:14:24 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.613 18:14:24 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.613 18:14:24 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.613 18:14:24 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.613 18:14:24 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.613 18:14:24 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:05.613 18:14:24 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:05.613 18:14:24 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.613 18:14:24 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:05.613 18:14:24 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:05.613 18:14:24 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:05.613 18:14:24 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.613 18:14:24 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:05.613 18:14:24 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.613 18:14:24 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:05.613 18:14:24 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:05.613 18:14:24 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.613 18:14:24 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:05.613 18:14:24 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.613 18:14:24 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.613 18:14:24 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.613 18:14:24 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:05.613 18:14:24 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.613 18:14:24 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:05.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.613 --rc genhtml_branch_coverage=1 00:05:05.613 --rc genhtml_function_coverage=1 00:05:05.613 --rc genhtml_legend=1 00:05:05.613 --rc geninfo_all_blocks=1 00:05:05.613 --rc geninfo_unexecuted_blocks=1 00:05:05.613 00:05:05.613 ' 00:05:05.613 18:14:24 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:05.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.613 --rc genhtml_branch_coverage=1 00:05:05.613 --rc genhtml_function_coverage=1 00:05:05.613 --rc genhtml_legend=1 00:05:05.613 --rc geninfo_all_blocks=1 00:05:05.613 --rc geninfo_unexecuted_blocks=1 00:05:05.613 
00:05:05.613 ' 00:05:05.613 18:14:24 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:05.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.613 --rc genhtml_branch_coverage=1 00:05:05.613 --rc genhtml_function_coverage=1 00:05:05.613 --rc genhtml_legend=1 00:05:05.613 --rc geninfo_all_blocks=1 00:05:05.613 --rc geninfo_unexecuted_blocks=1 00:05:05.613 00:05:05.613 ' 00:05:05.613 18:14:24 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:05.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.613 --rc genhtml_branch_coverage=1 00:05:05.613 --rc genhtml_function_coverage=1 00:05:05.613 --rc genhtml_legend=1 00:05:05.613 --rc geninfo_all_blocks=1 00:05:05.613 --rc geninfo_unexecuted_blocks=1 00:05:05.613 00:05:05.613 ' 00:05:05.613 18:14:24 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:05.613 18:14:24 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:05.613 18:14:24 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:05.613 18:14:24 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:05.613 18:14:24 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:05.613 18:14:24 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:05.613 18:14:24 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:05.613 18:14:24 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:05.613 18:14:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:05.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.613 18:14:24 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57953 00:05:05.613 18:14:24 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57953 00:05:05.613 18:14:24 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57953 ']' 00:05:05.613 18:14:24 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.613 18:14:24 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.613 18:14:24 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.613 18:14:24 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.613 18:14:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:05.613 18:14:24 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:05.875 [2024-11-20 18:14:24.282947] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:05:05.875 [2024-11-20 18:14:24.283137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57953 ] 00:05:05.875 [2024-11-20 18:14:24.456129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:06.136 [2024-11-20 18:14:24.553996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.136 [2024-11-20 18:14:24.554066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.706 18:14:25 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.706 18:14:25 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:06.706 18:14:25 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57970 00:05:06.706 18:14:25 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:06.706 18:14:25 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:06.706 [ 00:05:06.706 "bdev_malloc_delete", 00:05:06.706 "bdev_malloc_create", 00:05:06.706 "bdev_null_resize", 00:05:06.706 "bdev_null_delete", 00:05:06.706 "bdev_null_create", 00:05:06.706 "bdev_nvme_cuse_unregister", 00:05:06.706 "bdev_nvme_cuse_register", 00:05:06.706 "bdev_opal_new_user", 00:05:06.706 "bdev_opal_set_lock_state", 00:05:06.706 "bdev_opal_delete", 00:05:06.706 "bdev_opal_get_info", 00:05:06.707 "bdev_opal_create", 00:05:06.707 "bdev_nvme_opal_revert", 00:05:06.707 "bdev_nvme_opal_init", 00:05:06.707 "bdev_nvme_send_cmd", 00:05:06.707 "bdev_nvme_set_keys", 00:05:06.707 "bdev_nvme_get_path_iostat", 00:05:06.707 "bdev_nvme_get_mdns_discovery_info", 00:05:06.707 "bdev_nvme_stop_mdns_discovery", 00:05:06.707 "bdev_nvme_start_mdns_discovery", 00:05:06.707 "bdev_nvme_set_multipath_policy", 00:05:06.707 "bdev_nvme_set_preferred_path", 00:05:06.707 "bdev_nvme_get_io_paths", 00:05:06.707 "bdev_nvme_remove_error_injection", 00:05:06.707 "bdev_nvme_add_error_injection", 00:05:06.707 "bdev_nvme_get_discovery_info", 00:05:06.707 "bdev_nvme_stop_discovery", 00:05:06.707 "bdev_nvme_start_discovery", 00:05:06.707 "bdev_nvme_get_controller_health_info", 00:05:06.707 "bdev_nvme_disable_controller", 00:05:06.707 "bdev_nvme_enable_controller", 00:05:06.707 "bdev_nvme_reset_controller", 00:05:06.707 "bdev_nvme_get_transport_statistics", 00:05:06.707 "bdev_nvme_apply_firmware", 00:05:06.707 "bdev_nvme_detach_controller", 00:05:06.707 "bdev_nvme_get_controllers", 00:05:06.707 "bdev_nvme_attach_controller", 00:05:06.707 "bdev_nvme_set_hotplug", 00:05:06.707 "bdev_nvme_set_options", 00:05:06.707 "bdev_passthru_delete", 00:05:06.707 "bdev_passthru_create", 00:05:06.707 "bdev_lvol_set_parent_bdev", 00:05:06.707 "bdev_lvol_set_parent", 00:05:06.707 "bdev_lvol_check_shallow_copy", 00:05:06.707 "bdev_lvol_start_shallow_copy", 00:05:06.707 "bdev_lvol_grow_lvstore", 00:05:06.707 "bdev_lvol_get_lvols", 00:05:06.707 "bdev_lvol_get_lvstores", 00:05:06.707 "bdev_lvol_delete", 00:05:06.707 "bdev_lvol_set_read_only", 00:05:06.707 "bdev_lvol_resize", 00:05:06.707 "bdev_lvol_decouple_parent", 00:05:06.707 "bdev_lvol_inflate", 00:05:06.707 "bdev_lvol_rename", 00:05:06.707 "bdev_lvol_clone_bdev", 00:05:06.707 "bdev_lvol_clone", 00:05:06.707 "bdev_lvol_snapshot", 00:05:06.707 "bdev_lvol_create", 00:05:06.707 "bdev_lvol_delete_lvstore", 00:05:06.707 "bdev_lvol_rename_lvstore", 00:05:06.707 
"bdev_lvol_create_lvstore", 00:05:06.707 "bdev_raid_set_options", 00:05:06.707 "bdev_raid_remove_base_bdev", 00:05:06.707 "bdev_raid_add_base_bdev", 00:05:06.707 "bdev_raid_delete", 00:05:06.707 "bdev_raid_create", 00:05:06.707 "bdev_raid_get_bdevs", 00:05:06.707 "bdev_error_inject_error", 00:05:06.707 "bdev_error_delete", 00:05:06.707 "bdev_error_create", 00:05:06.707 "bdev_split_delete", 00:05:06.707 "bdev_split_create", 00:05:06.707 "bdev_delay_delete", 00:05:06.707 "bdev_delay_create", 00:05:06.707 "bdev_delay_update_latency", 00:05:06.707 "bdev_zone_block_delete", 00:05:06.707 "bdev_zone_block_create", 00:05:06.707 "blobfs_create", 00:05:06.707 "blobfs_detect", 00:05:06.707 "blobfs_set_cache_size", 00:05:06.707 "bdev_xnvme_delete", 00:05:06.707 "bdev_xnvme_create", 00:05:06.707 "bdev_aio_delete", 00:05:06.707 "bdev_aio_rescan", 00:05:06.707 "bdev_aio_create", 00:05:06.707 "bdev_ftl_set_property", 00:05:06.707 "bdev_ftl_get_properties", 00:05:06.707 "bdev_ftl_get_stats", 00:05:06.707 "bdev_ftl_unmap", 00:05:06.707 "bdev_ftl_unload", 00:05:06.707 "bdev_ftl_delete", 00:05:06.707 "bdev_ftl_load", 00:05:06.707 "bdev_ftl_create", 00:05:06.707 "bdev_virtio_attach_controller", 00:05:06.707 "bdev_virtio_scsi_get_devices", 00:05:06.707 "bdev_virtio_detach_controller", 00:05:06.707 "bdev_virtio_blk_set_hotplug", 00:05:06.707 "bdev_iscsi_delete", 00:05:06.707 "bdev_iscsi_create", 00:05:06.707 "bdev_iscsi_set_options", 00:05:06.707 "accel_error_inject_error", 00:05:06.707 "ioat_scan_accel_module", 00:05:06.707 "dsa_scan_accel_module", 00:05:06.707 "iaa_scan_accel_module", 00:05:06.707 "keyring_file_remove_key", 00:05:06.707 "keyring_file_add_key", 00:05:06.707 "keyring_linux_set_options", 00:05:06.707 "fsdev_aio_delete", 00:05:06.707 "fsdev_aio_create", 00:05:06.707 "iscsi_get_histogram", 00:05:06.707 "iscsi_enable_histogram", 00:05:06.707 "iscsi_set_options", 00:05:06.707 "iscsi_get_auth_groups", 00:05:06.707 "iscsi_auth_group_remove_secret", 00:05:06.707 "iscsi_auth_group_add_secret", 00:05:06.707 "iscsi_delete_auth_group", 00:05:06.707 "iscsi_create_auth_group", 00:05:06.707 "iscsi_set_discovery_auth", 00:05:06.707 "iscsi_get_options", 00:05:06.707 "iscsi_target_node_request_logout", 00:05:06.707 "iscsi_target_node_set_redirect", 00:05:06.707 "iscsi_target_node_set_auth", 00:05:06.707 "iscsi_target_node_add_lun", 00:05:06.707 "iscsi_get_stats", 00:05:06.707 "iscsi_get_connections", 00:05:06.707 "iscsi_portal_group_set_auth", 00:05:06.707 "iscsi_start_portal_group", 00:05:06.707 "iscsi_delete_portal_group", 00:05:06.707 "iscsi_create_portal_group", 00:05:06.707 "iscsi_get_portal_groups", 00:05:06.707 "iscsi_delete_target_node", 00:05:06.707 "iscsi_target_node_remove_pg_ig_maps", 00:05:06.707 "iscsi_target_node_add_pg_ig_maps", 00:05:06.707 "iscsi_create_target_node", 00:05:06.707 "iscsi_get_target_nodes", 00:05:06.707 "iscsi_delete_initiator_group", 00:05:06.707 "iscsi_initiator_group_remove_initiators", 00:05:06.707 "iscsi_initiator_group_add_initiators", 00:05:06.707 "iscsi_create_initiator_group", 00:05:06.707 "iscsi_get_initiator_groups", 00:05:06.707 "nvmf_set_crdt", 00:05:06.707 "nvmf_set_config", 00:05:06.707 "nvmf_set_max_subsystems", 00:05:06.707 "nvmf_stop_mdns_prr", 00:05:06.707 "nvmf_publish_mdns_prr", 00:05:06.707 "nvmf_subsystem_get_listeners", 00:05:06.707 "nvmf_subsystem_get_qpairs", 00:05:06.707 "nvmf_subsystem_get_controllers", 00:05:06.707 "nvmf_get_stats", 00:05:06.707 "nvmf_get_transports", 00:05:06.707 "nvmf_create_transport", 00:05:06.707 "nvmf_get_targets", 00:05:06.707 
"nvmf_delete_target", 00:05:06.707 "nvmf_create_target", 00:05:06.707 "nvmf_subsystem_allow_any_host", 00:05:06.707 "nvmf_subsystem_set_keys", 00:05:06.707 "nvmf_subsystem_remove_host", 00:05:06.707 "nvmf_subsystem_add_host", 00:05:06.707 "nvmf_ns_remove_host", 00:05:06.707 "nvmf_ns_add_host", 00:05:06.707 "nvmf_subsystem_remove_ns", 00:05:06.707 "nvmf_subsystem_set_ns_ana_group", 00:05:06.707 "nvmf_subsystem_add_ns", 00:05:06.707 "nvmf_subsystem_listener_set_ana_state", 00:05:06.707 "nvmf_discovery_get_referrals", 00:05:06.707 "nvmf_discovery_remove_referral", 00:05:06.707 "nvmf_discovery_add_referral", 00:05:06.707 "nvmf_subsystem_remove_listener", 00:05:06.707 "nvmf_subsystem_add_listener", 00:05:06.707 "nvmf_delete_subsystem", 00:05:06.707 "nvmf_create_subsystem", 00:05:06.707 "nvmf_get_subsystems", 00:05:06.707 "env_dpdk_get_mem_stats", 00:05:06.707 "nbd_get_disks", 00:05:06.707 "nbd_stop_disk", 00:05:06.707 "nbd_start_disk", 00:05:06.707 "ublk_recover_disk", 00:05:06.707 "ublk_get_disks", 00:05:06.707 "ublk_stop_disk", 00:05:06.707 "ublk_start_disk", 00:05:06.707 "ublk_destroy_target", 00:05:06.707 "ublk_create_target", 00:05:06.707 "virtio_blk_create_transport", 00:05:06.707 "virtio_blk_get_transports", 00:05:06.707 "vhost_controller_set_coalescing", 00:05:06.707 "vhost_get_controllers", 00:05:06.707 "vhost_delete_controller", 00:05:06.707 "vhost_create_blk_controller", 00:05:06.707 "vhost_scsi_controller_remove_target", 00:05:06.707 "vhost_scsi_controller_add_target", 00:05:06.707 "vhost_start_scsi_controller", 00:05:06.707 "vhost_create_scsi_controller", 00:05:06.707 "thread_set_cpumask", 00:05:06.707 "scheduler_set_options", 00:05:06.707 "framework_get_governor", 00:05:06.707 "framework_get_scheduler", 00:05:06.707 "framework_set_scheduler", 00:05:06.707 "framework_get_reactors", 00:05:06.707 "thread_get_io_channels", 00:05:06.707 "thread_get_pollers", 00:05:06.707 "thread_get_stats", 00:05:06.707 "framework_monitor_context_switch", 00:05:06.707 "spdk_kill_instance", 00:05:06.707 "log_enable_timestamps", 00:05:06.707 "log_get_flags", 00:05:06.707 "log_clear_flag", 00:05:06.707 "log_set_flag", 00:05:06.707 "log_get_level", 00:05:06.707 "log_set_level", 00:05:06.707 "log_get_print_level", 00:05:06.707 "log_set_print_level", 00:05:06.707 "framework_enable_cpumask_locks", 00:05:06.707 "framework_disable_cpumask_locks", 00:05:06.707 "framework_wait_init", 00:05:06.707 "framework_start_init", 00:05:06.707 "scsi_get_devices", 00:05:06.707 "bdev_get_histogram", 00:05:06.707 "bdev_enable_histogram", 00:05:06.707 "bdev_set_qos_limit", 00:05:06.707 "bdev_set_qd_sampling_period", 00:05:06.707 "bdev_get_bdevs", 00:05:06.707 "bdev_reset_iostat", 00:05:06.707 "bdev_get_iostat", 00:05:06.707 "bdev_examine", 00:05:06.707 "bdev_wait_for_examine", 00:05:06.707 "bdev_set_options", 00:05:06.707 "accel_get_stats", 00:05:06.707 "accel_set_options", 00:05:06.707 "accel_set_driver", 00:05:06.707 "accel_crypto_key_destroy", 00:05:06.707 "accel_crypto_keys_get", 00:05:06.707 "accel_crypto_key_create", 00:05:06.707 "accel_assign_opc", 00:05:06.707 "accel_get_module_info", 00:05:06.707 "accel_get_opc_assignments", 00:05:06.707 "vmd_rescan", 00:05:06.707 "vmd_remove_device", 00:05:06.707 "vmd_enable", 00:05:06.707 "sock_get_default_impl", 00:05:06.707 "sock_set_default_impl", 00:05:06.707 "sock_impl_set_options", 00:05:06.707 "sock_impl_get_options", 00:05:06.707 "iobuf_get_stats", 00:05:06.707 "iobuf_set_options", 00:05:06.707 "keyring_get_keys", 00:05:06.707 "framework_get_pci_devices", 00:05:06.707 
"framework_get_config", 00:05:06.707 "framework_get_subsystems", 00:05:06.707 "fsdev_set_opts", 00:05:06.707 "fsdev_get_opts", 00:05:06.707 "trace_get_info", 00:05:06.707 "trace_get_tpoint_group_mask", 00:05:06.707 "trace_disable_tpoint_group", 00:05:06.707 "trace_enable_tpoint_group", 00:05:06.707 "trace_clear_tpoint_mask", 00:05:06.707 "trace_set_tpoint_mask", 00:05:06.707 "notify_get_notifications", 00:05:06.707 "notify_get_types", 00:05:06.707 "spdk_get_version", 00:05:06.707 "rpc_get_methods" 00:05:06.707 ] 00:05:06.967 18:14:25 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:06.967 18:14:25 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:06.967 18:14:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:06.967 18:14:25 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:06.967 18:14:25 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57953 00:05:06.967 18:14:25 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57953 ']' 00:05:06.967 18:14:25 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57953 00:05:06.967 18:14:25 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:06.967 18:14:25 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:06.967 18:14:25 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57953 00:05:06.967 killing process with pid 57953 00:05:06.967 18:14:25 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:06.967 18:14:25 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:06.967 18:14:25 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57953' 00:05:06.967 18:14:25 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57953 00:05:06.967 18:14:25 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57953 00:05:08.341 ************************************ 00:05:08.341 END TEST spdkcli_tcp 00:05:08.341 ************************************ 00:05:08.341 00:05:08.341 real 0m2.676s 00:05:08.341 user 0m4.820s 00:05:08.341 sys 0m0.441s 00:05:08.341 18:14:26 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.341 18:14:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:08.341 18:14:26 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:08.341 18:14:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.341 18:14:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.341 18:14:26 -- common/autotest_common.sh@10 -- # set +x 00:05:08.341 ************************************ 00:05:08.341 START TEST dpdk_mem_utility 00:05:08.341 ************************************ 00:05:08.341 18:14:26 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:08.341 * Looking for test storage... 
00:05:08.341 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:08.341 18:14:26 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:08.341 18:14:26 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:08.342 18:14:26 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:08.342 18:14:26 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:08.342 18:14:26 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.342 18:14:26 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.342 18:14:26 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.342 18:14:26 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.342 18:14:26 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.342 18:14:26 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.342 18:14:26 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.342 18:14:26 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.342 18:14:26 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.342 18:14:26 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.342 18:14:26 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.342 18:14:26 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:08.342 18:14:26 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:08.342 18:14:26 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.342 18:14:26 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:08.342 18:14:26 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:08.342 18:14:26 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:08.342 18:14:26 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.342 18:14:26 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:08.342 18:14:26 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.342 18:14:26 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:08.342 18:14:26 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:08.342 18:14:26 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.342 18:14:26 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:08.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:08.342 18:14:26 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.342 18:14:26 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.342 18:14:26 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.342 18:14:26 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:08.342 18:14:26 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.342 18:14:26 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:08.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.342 --rc genhtml_branch_coverage=1 00:05:08.342 --rc genhtml_function_coverage=1 00:05:08.342 --rc genhtml_legend=1 00:05:08.342 --rc geninfo_all_blocks=1 00:05:08.342 --rc geninfo_unexecuted_blocks=1 00:05:08.342 00:05:08.342 ' 00:05:08.342 18:14:26 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:08.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.342 --rc genhtml_branch_coverage=1 00:05:08.342 --rc genhtml_function_coverage=1 00:05:08.342 --rc genhtml_legend=1 00:05:08.342 --rc geninfo_all_blocks=1 00:05:08.342 --rc geninfo_unexecuted_blocks=1 00:05:08.342 00:05:08.342 ' 00:05:08.342 18:14:26 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:08.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.342 --rc genhtml_branch_coverage=1 00:05:08.342 --rc genhtml_function_coverage=1 00:05:08.342 --rc genhtml_legend=1 00:05:08.342 --rc geninfo_all_blocks=1 00:05:08.342 --rc geninfo_unexecuted_blocks=1 00:05:08.342 00:05:08.342 ' 00:05:08.342 18:14:26 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:08.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.342 --rc genhtml_branch_coverage=1 00:05:08.342 --rc genhtml_function_coverage=1 00:05:08.342 --rc genhtml_legend=1 00:05:08.342 --rc geninfo_all_blocks=1 00:05:08.342 --rc geninfo_unexecuted_blocks=1 00:05:08.342 00:05:08.342 ' 00:05:08.342 18:14:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:08.342 18:14:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58058 00:05:08.342 18:14:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58058 00:05:08.342 18:14:26 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58058 ']' 00:05:08.342 18:14:26 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.342 18:14:26 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:08.342 18:14:26 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.342 18:14:26 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:08.342 18:14:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:08.342 18:14:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:08.342 [2024-11-20 18:14:26.953119] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:05:08.342 [2024-11-20 18:14:26.953200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58058 ] 00:05:08.600 [2024-11-20 18:14:27.103676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.600 [2024-11-20 18:14:27.179382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.165 18:14:27 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.165 18:14:27 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:09.165 18:14:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:09.165 18:14:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:09.165 18:14:27 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.165 18:14:27 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:09.166 { 00:05:09.166 "filename": "/tmp/spdk_mem_dump.txt" 00:05:09.166 } 00:05:09.166 18:14:27 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.166 18:14:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:09.425 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:09.425 1 heaps totaling size 816.000000 MiB 00:05:09.425 size: 816.000000 MiB heap id: 0 00:05:09.425 end heaps---------- 00:05:09.425 9 mempools totaling size 595.772034 MiB 00:05:09.425 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:09.425 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:09.425 size: 92.545471 MiB name: bdev_io_58058 00:05:09.425 size: 50.003479 MiB name: msgpool_58058 00:05:09.425 size: 36.509338 MiB name: fsdev_io_58058 00:05:09.425 size: 21.763794 MiB name: PDU_Pool 00:05:09.425 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:09.425 size: 4.133484 MiB name: evtpool_58058 00:05:09.425 size: 0.026123 MiB name: Session_Pool 00:05:09.425 end mempools------- 00:05:09.425 6 memzones totaling size 4.142822 MiB 00:05:09.425 size: 1.000366 MiB name: RG_ring_0_58058 00:05:09.425 size: 1.000366 MiB name: RG_ring_1_58058 00:05:09.425 size: 1.000366 MiB name: RG_ring_4_58058 00:05:09.425 size: 1.000366 MiB name: RG_ring_5_58058 00:05:09.425 size: 0.125366 MiB name: RG_ring_2_58058 00:05:09.425 size: 0.015991 MiB name: RG_ring_3_58058 00:05:09.425 end memzones------- 00:05:09.425 18:14:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:09.425 heap id: 0 total size: 816.000000 MiB number of busy elements: 328 number of free elements: 18 00:05:09.425 list of free elements. 
size: 16.788208 MiB
00:05:09.425 element at address: 0x200006400000 with size: 1.995972 MiB
00:05:09.425 element at address: 0x20000a600000 with size: 1.995972 MiB
00:05:09.425 element at address: 0x200003e00000 with size: 1.991028 MiB
00:05:09.425 element at address: 0x200018d00040 with size: 0.999939 MiB
00:05:09.425 element at address: 0x200019100040 with size: 0.999939 MiB
00:05:09.426 element at address: 0x200019200000 with size: 0.999084 MiB
00:05:09.426 element at address: 0x200031e00000 with size: 0.994324 MiB
00:05:09.426 element at address: 0x200000400000 with size: 0.992004 MiB
00:05:09.426 element at address: 0x200018a00000 with size: 0.959656 MiB
00:05:09.426 element at address: 0x200019500040 with size: 0.936401 MiB
00:05:09.426 element at address: 0x200000200000 with size: 0.716980 MiB
00:05:09.426 element at address: 0x20001ac00000 with size: 0.558533 MiB
00:05:09.426 element at address: 0x200000c00000 with size: 0.490173 MiB
00:05:09.426 element at address: 0x200018e00000 with size: 0.487976 MiB
00:05:09.426 element at address: 0x200019600000 with size: 0.485413 MiB
00:05:09.426 element at address: 0x200012c00000 with size: 0.443237 MiB
00:05:09.426 element at address: 0x200028000000 with size: 0.390686 MiB
00:05:09.426 element at address: 0x200000800000 with size: 0.350891 MiB
00:05:09.426 list of standard malloc elements. size: 199.290894 MiB
00:05:09.426 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:05:09.426 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:05:09.426 element at address: 0x200018bfff80 with size: 1.000183 MiB
00:05:09.426 element at address: 0x200018ffff80 with size: 1.000183 MiB
00:05:09.426 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:05:09.426 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:05:09.426 element at address: 0x2000195eff40 with size: 0.062683 MiB
00:05:09.426 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:05:09.426 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:05:09.426 element at address: 0x2000195efdc0 with size: 0.000366 MiB
00:05:09.426 element at address: 0x200012bff040 with size: 0.000305 MiB
[... roughly 300 further malloc elements of 0.000244 MiB each, at addresses from 0x2000002d7b00 through 0x20002806fe80, elided ...]
00:05:09.428 list of memzone associated elements. size: 599.920898 MiB
00:05:09.428 element at address: 0x20001ac954c0 with size: 211.416809 MiB
00:05:09.428 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:09.428 element at address: 0x20002806ff80 with size: 157.562622 MiB
00:05:09.428 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:09.428 element at address: 0x200012df4740 with size: 92.045105 MiB
00:05:09.428 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58058_0
00:05:09.428 element at address: 0x200000dff340 with size: 48.003113 MiB
00:05:09.428 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58058_0
00:05:09.428 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:05:09.428 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58058_0
00:05:09.428 element at address: 0x2000197be900 with size: 20.255615 MiB
00:05:09.428 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:05:09.428 element at address: 0x200031ffeb00 with size: 18.005127 MiB
00:05:09.428 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:05:09.428 element at address: 0x2000004ffec0 with size: 3.000305 MiB
00:05:09.428 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58058_0
00:05:09.428 element at address: 0x2000009ffdc0 with size: 2.000549 MiB
00:05:09.428 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58058
00:05:09.428 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:05:09.428 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58058
00:05:09.428 element at address: 0x200018efde00 with size: 1.008179 MiB
00:05:09.428 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:05:09.428 element at address: 0x2000196bc780 with size: 1.008179 MiB
00:05:09.428 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:05:09.428 element at address: 0x200018afde00 with size: 1.008179 MiB
00:05:09.428 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:05:09.428 element at address: 0x200012cf25c0 with size: 1.008179 MiB
00:05:09.428 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:05:09.428 element at address: 0x200000cff100 with size: 1.000549 MiB
00:05:09.428 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58058
00:05:09.428 element at address: 0x2000008ffb80 with size: 1.000549 MiB
00:05:09.428 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58058
00:05:09.428 element at address: 0x2000192ffd40 with size: 1.000549 MiB
00:05:09.428 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58058
00:05:09.428 element at address: 0x200031efe8c0 with size: 1.000549 MiB
00:05:09.428 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58058
00:05:09.428 element at address: 0x20000087f5c0 with size: 0.500549 MiB
00:05:09.428 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58058
00:05:09.428 element at address: 0x200000c7ecc0 with size: 0.500549 MiB
00:05:09.428 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58058
00:05:09.428 element at address: 0x200018e7dac0 with size: 0.500549 MiB
00:05:09.428 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:05:09.428 element at address: 0x200012c72280 with size: 0.500549 MiB
00:05:09.428 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:05:09.428 element at address: 0x20001967c440 with size: 0.250549 MiB
00:05:09.428 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:09.428 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:05:09.428 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58058
00:05:09.428 element at address: 0x20000085df80 with size: 0.125549 MiB
00:05:09.428 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58058
00:05:09.428 element at address: 0x200018af5ac0 with size: 0.031799 MiB
00:05:09.428 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:09.428 element at address: 0x200028064240 with size: 0.023804 MiB
00:05:09.428 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:05:09.428 element at address: 0x200000859d40 with size: 0.016174 MiB
00:05:09.428 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58058
00:05:09.428 element at address: 0x20002806a3c0 with size: 0.002502 MiB
00:05:09.428 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:05:09.428 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:05:09.428 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58058
00:05:09.428 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:05:09.428 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58058
00:05:09.428 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:05:09.428 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58058
00:05:09.428 element at address: 0x20002806af00 with size: 0.000366 MiB
00:05:09.428 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:05:09.428 18:14:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:09.428 18:14:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58058
00:05:09.428 18:14:27 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58058 ']'
00:05:09.428 18:14:27 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58058
00:05:09.428 18:14:27 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:05:09.428 18:14:27 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:09.428 18:14:27 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58058
00:05:09.428 18:14:27 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:09.428 18:14:27 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:09.428 18:14:27 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58058'
00:05:09.428 killing process with pid 58058
18:14:27 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58058
00:05:09.428 18:14:27 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58058
00:05:10.804
00:05:10.804 real 0m2.296s
00:05:10.804 user 0m2.324s
00:05:10.804 sys 0m0.344s
00:05:10.804 18:14:29 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:10.804 18:14:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:10.804 ************************************
00:05:10.804 END TEST dpdk_mem_utility
00:05:10.804 ************************************
00:05:10.804 18:14:29 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
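The element and memzone listing above is the report the dpdk_mem_utility suite pulls from the running target. A minimal sketch of the same query by hand, assuming a live SPDK application on the default RPC socket (the dump file path is returned in the RPC response; /tmp/spdk_mem_dump.txt is assumed here as the usual default):

  # ask the running app to write out its DPDK memory stats, then read the report
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
  cat /tmp/spdk_mem_dump.txt   # same malloc-element and memzone lists as above

18:14:29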
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.804 18:14:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.804 18:14:29 -- common/autotest_common.sh@10 -- # set +x 00:05:10.804 ************************************ 00:05:10.804 START TEST event 00:05:10.804 ************************************ 00:05:10.804 18:14:29 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:10.804 * Looking for test storage... 00:05:10.804 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:10.804 18:14:29 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:10.804 18:14:29 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:10.804 18:14:29 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:10.804 18:14:29 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:10.804 18:14:29 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.804 18:14:29 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.804 18:14:29 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.804 18:14:29 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.804 18:14:29 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.804 18:14:29 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.804 18:14:29 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.804 18:14:29 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.804 18:14:29 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.804 18:14:29 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.804 18:14:29 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.804 18:14:29 event -- scripts/common.sh@344 -- # case "$op" in 00:05:10.804 18:14:29 event -- scripts/common.sh@345 -- # : 1 00:05:10.804 18:14:29 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.804 18:14:29 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:10.804 18:14:29 event -- scripts/common.sh@365 -- # decimal 1 00:05:10.804 18:14:29 event -- scripts/common.sh@353 -- # local d=1 00:05:10.804 18:14:29 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.804 18:14:29 event -- scripts/common.sh@355 -- # echo 1 00:05:10.804 18:14:29 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.804 18:14:29 event -- scripts/common.sh@366 -- # decimal 2 00:05:10.804 18:14:29 event -- scripts/common.sh@353 -- # local d=2 00:05:10.804 18:14:29 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.804 18:14:29 event -- scripts/common.sh@355 -- # echo 2 00:05:10.804 18:14:29 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.804 18:14:29 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.804 18:14:29 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.804 18:14:29 event -- scripts/common.sh@368 -- # return 0 00:05:10.804 18:14:29 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.804 18:14:29 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:10.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.804 --rc genhtml_branch_coverage=1 00:05:10.804 --rc genhtml_function_coverage=1 00:05:10.804 --rc genhtml_legend=1 00:05:10.804 --rc geninfo_all_blocks=1 00:05:10.804 --rc geninfo_unexecuted_blocks=1 00:05:10.804 00:05:10.804 ' 00:05:10.804 18:14:29 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:10.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.804 --rc genhtml_branch_coverage=1 00:05:10.804 --rc genhtml_function_coverage=1 00:05:10.804 --rc genhtml_legend=1 00:05:10.804 --rc geninfo_all_blocks=1 00:05:10.804 --rc geninfo_unexecuted_blocks=1 00:05:10.804 00:05:10.804 ' 00:05:10.804 18:14:29 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:10.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.804 --rc genhtml_branch_coverage=1 00:05:10.804 --rc genhtml_function_coverage=1 00:05:10.804 --rc genhtml_legend=1 00:05:10.804 --rc geninfo_all_blocks=1 00:05:10.804 --rc geninfo_unexecuted_blocks=1 00:05:10.804 00:05:10.804 ' 00:05:10.804 18:14:29 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:10.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.804 --rc genhtml_branch_coverage=1 00:05:10.804 --rc genhtml_function_coverage=1 00:05:10.804 --rc genhtml_legend=1 00:05:10.804 --rc geninfo_all_blocks=1 00:05:10.804 --rc geninfo_unexecuted_blocks=1 00:05:10.804 00:05:10.804 ' 00:05:10.804 18:14:29 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:10.804 18:14:29 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:10.804 18:14:29 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:10.804 18:14:29 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:10.804 18:14:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.804 18:14:29 event -- common/autotest_common.sh@10 -- # set +x 00:05:10.804 ************************************ 00:05:10.804 START TEST event_perf 00:05:10.804 ************************************ 00:05:10.804 18:14:29 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:10.804 Running I/O for 1 seconds...[2024-11-20 
18:14:29.263042] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:05:10.804 [2024-11-20 18:14:29.263240] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58150 ] 00:05:10.804 [2024-11-20 18:14:29.421552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:11.063 [2024-11-20 18:14:29.520863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.063 [2024-11-20 18:14:29.521160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:11.063 [2024-11-20 18:14:29.521835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:11.063 [2024-11-20 18:14:29.521960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.466 Running I/O for 1 seconds... 00:05:12.466 lcore 0: 203322 00:05:12.466 lcore 1: 203321 00:05:12.466 lcore 2: 203324 00:05:12.466 lcore 3: 203322 00:05:12.466 done. 00:05:12.466 00:05:12.466 real 0m1.457s 00:05:12.466 user 0m4.247s 00:05:12.466 sys 0m0.089s 00:05:12.466 ************************************ 00:05:12.466 END TEST event_perf 00:05:12.466 ************************************ 00:05:12.466 18:14:30 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.466 18:14:30 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:12.466 18:14:30 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:12.466 18:14:30 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:12.466 18:14:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.466 18:14:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:12.466 ************************************ 00:05:12.466 START TEST event_reactor 00:05:12.466 ************************************ 00:05:12.466 18:14:30 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:12.466 [2024-11-20 18:14:30.776994] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
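The event_perf pass above can be reproduced by hand from a built SPDK tree. A minimal sketch, with the binary path and flags taken from the trace; running scripts/setup.sh first to reserve hugepages is an assumed prerequisite:

  # four reactors (core mask 0xF), one-second run, per-lcore event counts at the end
  sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh
  sudo /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1

Roughly equal counters across lcores 0-3 (here about 203k events each) is the healthy result: all four reactors stayed busy for the full second.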
00:05:12.466 [2024-11-20 18:14:30.777109] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58189 ] 00:05:12.466 [2024-11-20 18:14:30.937232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.466 [2024-11-20 18:14:31.036416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.851 test_start 00:05:13.851 oneshot 00:05:13.851 tick 100 00:05:13.851 tick 100 00:05:13.851 tick 250 00:05:13.851 tick 100 00:05:13.851 tick 100 00:05:13.851 tick 100 00:05:13.851 tick 250 00:05:13.851 tick 500 00:05:13.851 tick 100 00:05:13.851 tick 100 00:05:13.851 tick 250 00:05:13.851 tick 100 00:05:13.851 tick 100 00:05:13.851 test_end 00:05:13.851 ************************************ 00:05:13.851 END TEST event_reactor 00:05:13.851 ************************************ 00:05:13.851 00:05:13.851 real 0m1.440s 00:05:13.851 user 0m1.259s 00:05:13.851 sys 0m0.072s 00:05:13.851 18:14:32 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.851 18:14:32 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:13.851 18:14:32 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:13.851 18:14:32 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:13.851 18:14:32 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.851 18:14:32 event -- common/autotest_common.sh@10 -- # set +x 00:05:13.851 ************************************ 00:05:13.851 START TEST event_reactor_perf 00:05:13.851 ************************************ 00:05:13.851 18:14:32 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:13.851 [2024-11-20 18:14:32.276331] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
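The oneshot/tick trace above is the reactor test driving a single reactor with a one-shot event plus what appear to be three repeating timers; the smaller the tick value, the more often its line shows up in the one-second window. A hand-run sketch with the path and flag used by the suite:

  # one reactor (the test's default single-core mask), one-second run;
  # prints test_start, the tick trace, then test_end
  sudo /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1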
00:05:13.851 [2024-11-20 18:14:32.276430] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58226 ] 00:05:13.851 [2024-11-20 18:14:32.437115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.112 [2024-11-20 18:14:32.531212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.054 test_start 00:05:15.054 test_end 00:05:15.054 Performance: 317250 events per second 00:05:15.315 00:05:15.315 real 0m1.437s 00:05:15.315 user 0m1.259s 00:05:15.315 sys 0m0.070s 00:05:15.315 18:14:33 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.315 ************************************ 00:05:15.315 END TEST event_reactor_perf 00:05:15.315 ************************************ 00:05:15.315 18:14:33 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:15.315 18:14:33 event -- event/event.sh@49 -- # uname -s 00:05:15.315 18:14:33 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:15.315 18:14:33 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:15.315 18:14:33 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.315 18:14:33 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.315 18:14:33 event -- common/autotest_common.sh@10 -- # set +x 00:05:15.315 ************************************ 00:05:15.315 START TEST event_scheduler 00:05:15.315 ************************************ 00:05:15.315 18:14:33 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:15.315 * Looking for test storage... 
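Both event.sh above and scheduler.sh just below probe the installed lcov with an "lt 1.15 2" / cmp_versions walk from scripts/common.sh. A condensed, hand-checkable sketch of that comparison logic (not the verbatim implementation; helper and variable names follow the xtrace):

  decimal() {
      local d=$1
      [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0   # non-numeric parts compare as 0
  }
  cmp_versions() {
      local IFS=.-:                                 # split version strings on . - :
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      local op=$2
      read -ra ver2 <<< "$3"
      local v ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
      for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
          local a b
          a=$(decimal "${ver1[v]:-0}")
          b=$(decimal "${ver2[v]:-0}")
          ((a > b)) && { [[ $op == '>' ]]; return; }
          ((a < b)) && { [[ $op == '<' ]]; return; }
      done
      [[ $op == '==' ]]                             # every component equal
  }
  lt() { cmp_versions "$1" '<' "$2"; }
  lt 1.15 2 && echo older                           # succeeds: the first component, 1 < 2, decides it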
00:05:15.315 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:15.315 18:14:33 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:15.315 18:14:33 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:15.316 18:14:33 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:15.316 18:14:33 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:15.316 18:14:33 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.316 18:14:33 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.316 18:14:33 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.316 18:14:33 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.316 18:14:33 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.316 18:14:33 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.316 18:14:33 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.316 18:14:33 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.316 18:14:33 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.316 18:14:33 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.316 18:14:33 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.316 18:14:33 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:15.316 18:14:33 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:15.316 18:14:33 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.316 18:14:33 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:15.316 18:14:33 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:15.316 18:14:33 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:15.316 18:14:33 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.316 18:14:33 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:15.316 18:14:33 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.316 18:14:33 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:15.316 18:14:33 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:15.316 18:14:33 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.316 18:14:33 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:15.316 18:14:33 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.316 18:14:33 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.316 18:14:33 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.316 18:14:33 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:15.316 18:14:33 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.316 18:14:33 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:15.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.316 --rc genhtml_branch_coverage=1 00:05:15.316 --rc genhtml_function_coverage=1 00:05:15.316 --rc genhtml_legend=1 00:05:15.316 --rc geninfo_all_blocks=1 00:05:15.316 --rc geninfo_unexecuted_blocks=1 00:05:15.316 00:05:15.316 ' 00:05:15.316 18:14:33 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:15.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.316 --rc genhtml_branch_coverage=1 00:05:15.316 --rc genhtml_function_coverage=1 00:05:15.316 --rc genhtml_legend=1 00:05:15.316 --rc geninfo_all_blocks=1 00:05:15.316 --rc geninfo_unexecuted_blocks=1 00:05:15.316 00:05:15.316 ' 00:05:15.316 18:14:33 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:15.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.316 --rc genhtml_branch_coverage=1 00:05:15.316 --rc genhtml_function_coverage=1 00:05:15.316 --rc genhtml_legend=1 00:05:15.316 --rc geninfo_all_blocks=1 00:05:15.316 --rc geninfo_unexecuted_blocks=1 00:05:15.316 00:05:15.316 ' 00:05:15.316 18:14:33 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:15.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.316 --rc genhtml_branch_coverage=1 00:05:15.316 --rc genhtml_function_coverage=1 00:05:15.316 --rc genhtml_legend=1 00:05:15.316 --rc geninfo_all_blocks=1 00:05:15.316 --rc geninfo_unexecuted_blocks=1 00:05:15.316 00:05:15.316 ' 00:05:15.316 18:14:33 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:15.316 18:14:33 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58296 00:05:15.316 18:14:33 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:15.316 18:14:33 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58296 00:05:15.316 18:14:33 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58296 ']' 00:05:15.316 18:14:33 event.event_scheduler -- scheduler/scheduler.sh@34 -- # 
/home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:15.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.316 18:14:33 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.316 18:14:33 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.316 18:14:33 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.316 18:14:33 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.316 18:14:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:15.579 [2024-11-20 18:14:33.957179] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:05:15.579 [2024-11-20 18:14:33.957300] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58296 ] 00:05:15.579 [2024-11-20 18:14:34.118690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:15.842 [2024-11-20 18:14:34.217996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.842 [2024-11-20 18:14:34.218293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.842 [2024-11-20 18:14:34.219508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:15.842 [2024-11-20 18:14:34.219580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:16.415 18:14:34 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.415 18:14:34 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:16.415 18:14:34 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:16.415 18:14:34 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.415 18:14:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:16.415 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:16.415 POWER: Cannot set governor of lcore 0 to userspace 00:05:16.415 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:16.415 POWER: Cannot set governor of lcore 0 to performance 00:05:16.415 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:16.415 POWER: Cannot set governor of lcore 0 to userspace 00:05:16.415 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:16.415 POWER: Cannot set governor of lcore 0 to userspace 00:05:16.415 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:16.415 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:16.415 POWER: Unable to set Power Management Environment for lcore 0 00:05:16.415 [2024-11-20 18:14:34.805807] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:16.415 [2024-11-20 18:14:34.805828] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:16.415 [2024-11-20 18:14:34.805837] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:16.415 [2024-11-20 
18:14:34.805852] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:16.415 [2024-11-20 18:14:34.805860] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:16.415 [2024-11-20 18:14:34.805868] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:16.415 18:14:34 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.415 18:14:34 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:16.415 18:14:34 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.415 18:14:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:16.415 [2024-11-20 18:14:35.025034] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:16.415 18:14:35 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.415 18:14:35 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:16.415 18:14:35 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.415 18:14:35 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.415 18:14:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:16.677 ************************************ 00:05:16.677 START TEST scheduler_create_thread 00:05:16.677 ************************************ 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.677 2 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.677 3 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.677 4 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:16.677 18:14:35 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.677 5 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.677 6 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.677 7 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.677 8 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.677 9 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.677 10 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.677 18:14:35 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.677 ************************************ 00:05:16.677 END TEST scheduler_create_thread 00:05:16.677 ************************************ 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.677 00:05:16.677 real 0m0.110s 00:05:16.677 user 0m0.014s 00:05:16.677 sys 0m0.003s 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.677 18:14:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.677 18:14:35 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:16.677 18:14:35 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58296 00:05:16.677 18:14:35 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58296 ']' 00:05:16.677 18:14:35 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58296 00:05:16.677 18:14:35 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:16.677 18:14:35 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:16.677 18:14:35 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58296 00:05:16.677 killing process with pid 58296 00:05:16.677 18:14:35 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:16.677 18:14:35 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:16.678 18:14:35 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58296' 00:05:16.678 18:14:35 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58296 00:05:16.678 
18:14:35 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58296 00:05:17.248 [2024-11-20 18:14:35.637592] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:17.815 00:05:17.815 real 0m2.472s 00:05:17.815 user 0m4.301s 00:05:17.815 sys 0m0.327s 00:05:17.815 18:14:36 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.815 ************************************ 00:05:17.815 END TEST event_scheduler 00:05:17.815 ************************************ 00:05:17.815 18:14:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:17.815 18:14:36 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:17.815 18:14:36 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:17.815 18:14:36 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.815 18:14:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.815 18:14:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:17.815 ************************************ 00:05:17.815 START TEST app_repeat 00:05:17.815 ************************************ 00:05:17.815 18:14:36 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:17.815 18:14:36 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.815 18:14:36 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.815 18:14:36 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:17.815 18:14:36 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.815 18:14:36 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:17.815 18:14:36 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:17.815 18:14:36 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:17.815 Process app_repeat pid: 58375 00:05:17.815 18:14:36 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58375 00:05:17.815 18:14:36 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:17.815 18:14:36 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:17.815 18:14:36 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58375' 00:05:17.815 spdk_app_start Round 0 00:05:17.815 18:14:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:17.815 18:14:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:17.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:17.815 18:14:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58375 /var/tmp/spdk-nbd.sock 00:05:17.815 18:14:36 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58375 ']' 00:05:17.815 18:14:36 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:17.815 18:14:36 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.815 18:14:36 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
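Condensed, the event_scheduler pass that just finished drives the target purely over JSON-RPC. A sketch of the same sequence by hand, with RPC names, plugin and arguments as in the trace above (rpc_cmd in the suite wraps scripts/rpc.py, and scheduler_plugin must be importable by rpc.py, which the test arranges in its own environment):

  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock'
  $rpc framework_set_scheduler dynamic    # choose the scheduler while the app waits (--wait-for-rpc)
  $rpc framework_start_init               # finish startup
  # create a thread pinned by the plugin defaults, then retune and delete threads
  tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
  $rpc --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50   # 50 = assumed active-percent
  tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
  $rpc --plugin scheduler_plugin scheduler_thread_delete "$tid"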
00:05:17.815 18:14:36 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.815 18:14:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:17.815 [2024-11-20 18:14:36.314225] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:05:17.816 [2024-11-20 18:14:36.314447] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58375 ] 00:05:18.076 [2024-11-20 18:14:36.471578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:18.076 [2024-11-20 18:14:36.569121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.076 [2024-11-20 18:14:36.569124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.647 18:14:37 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.647 18:14:37 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:18.647 18:14:37 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:18.908 Malloc0 00:05:18.908 18:14:37 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:19.168 Malloc1 00:05:19.168 18:14:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:19.168 18:14:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.168 18:14:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:19.168 18:14:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:19.168 18:14:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.168 18:14:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:19.168 18:14:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:19.168 18:14:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.168 18:14:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:19.168 18:14:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:19.168 18:14:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.168 18:14:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:19.168 18:14:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:19.168 18:14:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:19.168 18:14:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.168 18:14:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:19.168 /dev/nbd0 00:05:19.168 18:14:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:19.168 18:14:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:19.168 18:14:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:19.168 18:14:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:19.168 18:14:37 event.app_repeat 
-- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:19.168 18:14:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:19.168 18:14:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:19.168 18:14:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:19.168 18:14:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:19.168 18:14:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:19.169 18:14:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:19.169 1+0 records in 00:05:19.169 1+0 records out 00:05:19.169 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311176 s, 13.2 MB/s 00:05:19.169 18:14:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:19.427 18:14:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:19.427 18:14:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:19.427 18:14:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:19.427 18:14:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:19.427 18:14:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:19.427 18:14:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.427 18:14:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:19.427 /dev/nbd1 00:05:19.427 18:14:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:19.427 18:14:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:19.427 18:14:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:19.427 18:14:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:19.427 18:14:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:19.427 18:14:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:19.427 18:14:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:19.427 18:14:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:19.427 18:14:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:19.427 18:14:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:19.427 18:14:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:19.427 1+0 records in 00:05:19.427 1+0 records out 00:05:19.427 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232381 s, 17.6 MB/s 00:05:19.427 18:14:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:19.427 18:14:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:19.428 18:14:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:19.428 18:14:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:19.428 18:14:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:19.428 18:14:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:19.428 
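[annotation] Each round builds the same data path: two 64 MiB malloc bdevs with a 4096-byte block size, each exported as a kernel nbd device over the app's RPC socket, then probed with `waitfornbd` until `/proc/partitions` lists it and a single O_DIRECT block reads back non-empty. Condensed from the `nbd_common.sh@15` and `autotest_common.sh@872-893` traces above (retry timing and error handling trimmed; the temp path is shortened):

```bash
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }

rpc bdev_malloc_create 64 4096        # prints the new bdev name, e.g. Malloc0
rpc bdev_malloc_create 64 4096        # e.g. Malloc1
rpc nbd_start_disk Malloc0 /dev/nbd0
rpc nbd_start_disk Malloc1 /dev/nbd1

# waitfornbd: the device is usable once the kernel publishes it and a real
# O_DIRECT read of one block lands a non-empty file on disk.
waitfornbd() {
    local nbd_name=$1 i tmp=/tmp/nbdtest   # trace writes to test/event/nbdtest
    for ((i = 1; i <= 20; i++)); do        # @875-877: wait for /proc/partitions
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1  # @889
    local size
    size=$(stat -c %s "$tmp")              # @890
    rm -f "$tmp"                           # @891
    [[ $size != 0 ]]                       # @892-893: copied size must be non-zero
}
waitfornbd nbd0 && waitfornbd nbd1
```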
18:14:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.428 18:14:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:19.428 18:14:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.428 18:14:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:19.688 18:14:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:19.688 { 00:05:19.688 "nbd_device": "/dev/nbd0", 00:05:19.688 "bdev_name": "Malloc0" 00:05:19.688 }, 00:05:19.688 { 00:05:19.688 "nbd_device": "/dev/nbd1", 00:05:19.688 "bdev_name": "Malloc1" 00:05:19.688 } 00:05:19.688 ]' 00:05:19.688 18:14:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:19.688 { 00:05:19.688 "nbd_device": "/dev/nbd0", 00:05:19.688 "bdev_name": "Malloc0" 00:05:19.688 }, 00:05:19.688 { 00:05:19.688 "nbd_device": "/dev/nbd1", 00:05:19.688 "bdev_name": "Malloc1" 00:05:19.688 } 00:05:19.688 ]' 00:05:19.688 18:14:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:19.688 18:14:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:19.688 /dev/nbd1' 00:05:19.688 18:14:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:19.688 /dev/nbd1' 00:05:19.688 18:14:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:19.688 18:14:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:19.688 18:14:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:19.688 18:14:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:19.688 18:14:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:19.688 18:14:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:19.688 18:14:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.688 18:14:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:19.688 18:14:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:19.688 18:14:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:19.688 18:14:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:19.688 18:14:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:19.688 256+0 records in 00:05:19.688 256+0 records out 00:05:19.688 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119344 s, 87.9 MB/s 00:05:19.688 18:14:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:19.688 18:14:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:19.688 256+0 records in 00:05:19.688 256+0 records out 00:05:19.688 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0193098 s, 54.3 MB/s 00:05:19.688 18:14:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:19.688 18:14:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:19.688 256+0 records in 00:05:19.688 256+0 records out 00:05:19.688 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.017294 s, 60.6 MB/s 00:05:19.688 18:14:38 event.app_repeat -- 
bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:19.688 18:14:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.688 18:14:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:19.688 18:14:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:19.688 18:14:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:19.688 18:14:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:19.688 18:14:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:19.688 18:14:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:19.688 18:14:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:19.688 18:14:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:19.688 18:14:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:19.688 18:14:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:19.688 18:14:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:19.688 18:14:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.688 18:14:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.688 18:14:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:19.688 18:14:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:19.688 18:14:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:19.688 18:14:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:19.947 18:14:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:19.947 18:14:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:19.947 18:14:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:19.947 18:14:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:19.947 18:14:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:19.947 18:14:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:19.947 18:14:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:19.947 18:14:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:19.947 18:14:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:19.947 18:14:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:20.204 18:14:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:20.204 18:14:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:20.204 18:14:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:20.205 18:14:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:20.205 18:14:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:20.205 18:14:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:20.205 18:14:38 
event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:20.205 18:14:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:20.205 18:14:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:20.205 18:14:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.205 18:14:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:20.462 18:14:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:20.462 18:14:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:20.462 18:14:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:20.462 18:14:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:20.462 18:14:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:20.462 18:14:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:20.462 18:14:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:20.462 18:14:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:20.462 18:14:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:20.463 18:14:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:20.463 18:14:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:20.463 18:14:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:20.463 18:14:38 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:20.731 18:14:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:21.668 [2024-11-20 18:14:40.059464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:21.668 [2024-11-20 18:14:40.153658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.668 [2024-11-20 18:14:40.153834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.668 [2024-11-20 18:14:40.280346] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:21.668 [2024-11-20 18:14:40.280397] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:24.229 spdk_app_start Round 1 00:05:24.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:24.229 18:14:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:24.229 18:14:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:24.229 18:14:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58375 /var/tmp/spdk-nbd.sock 00:05:24.229 18:14:42 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58375 ']' 00:05:24.229 18:14:42 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:24.229 18:14:42 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.229 18:14:42 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
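[annotation] The heart of the Round 0 pass that just completed (and of every round that follows) is `nbd_dd_data_verify`, `nbd_common.sh@70-85`: a 1 MiB random pattern file is generated once, written through both nbd devices with O_DIRECT, and then each device is compared byte-for-byte against the pattern. Condensed from the trace (path shortened):

```bash
# nbd_dd_data_verify, condensed: a write pass then a verify pass, per the trace.
pattern=/tmp/nbdrandtest                   # trace: test/event/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1)

# write: 256 x 4096-byte blocks of random data, pushed to every device (@76-78)
dd if=/dev/urandom of="$pattern" bs=4096 count=256
for dev in "${nbd_list[@]}"; do
    dd if="$pattern" of="$dev" bs=4096 count=256 oflag=direct
done

# verify: byte-compare the first 1M of each device against the pattern;
# cmp exits non-zero, failing the test, on the first mismatch (@82-83)
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$pattern" "$dev"
done
rm "$pattern"                              # @85
```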
00:05:24.229 18:14:42 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.229 18:14:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:24.229 18:14:42 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.229 18:14:42 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:24.229 18:14:42 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:24.229 Malloc0 00:05:24.229 18:14:42 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:24.488 Malloc1 00:05:24.488 18:14:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:24.488 18:14:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.488 18:14:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:24.488 18:14:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:24.488 18:14:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.488 18:14:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:24.488 18:14:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:24.488 18:14:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.488 18:14:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:24.488 18:14:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:24.488 18:14:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.488 18:14:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:24.488 18:14:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:24.488 18:14:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:24.488 18:14:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.488 18:14:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:24.747 /dev/nbd0 00:05:24.747 18:14:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:24.747 18:14:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:24.747 18:14:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:24.747 18:14:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:24.747 18:14:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:24.747 18:14:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:24.747 18:14:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:24.747 18:14:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:24.747 18:14:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:24.747 18:14:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:24.747 18:14:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:24.747 1+0 records in 00:05:24.747 1+0 records out 
00:05:24.747 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324209 s, 12.6 MB/s 00:05:24.747 18:14:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.747 18:14:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:24.747 18:14:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.747 18:14:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:24.747 18:14:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:24.747 18:14:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:24.747 18:14:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.747 18:14:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:25.006 /dev/nbd1 00:05:25.006 18:14:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:25.006 18:14:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:25.006 18:14:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:25.006 18:14:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:25.006 18:14:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:25.006 18:14:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:25.006 18:14:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:25.006 18:14:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:25.006 18:14:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:25.006 18:14:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:25.006 18:14:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:25.006 1+0 records in 00:05:25.006 1+0 records out 00:05:25.006 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290895 s, 14.1 MB/s 00:05:25.006 18:14:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:25.006 18:14:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:25.006 18:14:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:25.006 18:14:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:25.006 18:14:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:25.006 18:14:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:25.006 18:14:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.006 18:14:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:25.006 18:14:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.006 18:14:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:25.265 18:14:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:25.265 { 00:05:25.265 "nbd_device": "/dev/nbd0", 00:05:25.265 "bdev_name": "Malloc0" 00:05:25.265 }, 00:05:25.265 { 00:05:25.265 "nbd_device": "/dev/nbd1", 00:05:25.265 "bdev_name": "Malloc1" 00:05:25.265 } 
00:05:25.265 ]' 00:05:25.265 18:14:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:25.265 { 00:05:25.265 "nbd_device": "/dev/nbd0", 00:05:25.265 "bdev_name": "Malloc0" 00:05:25.265 }, 00:05:25.265 { 00:05:25.265 "nbd_device": "/dev/nbd1", 00:05:25.265 "bdev_name": "Malloc1" 00:05:25.265 } 00:05:25.265 ]' 00:05:25.265 18:14:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:25.265 18:14:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:25.265 /dev/nbd1' 00:05:25.265 18:14:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:25.265 /dev/nbd1' 00:05:25.265 18:14:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:25.265 18:14:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:25.265 18:14:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:25.265 18:14:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:25.265 18:14:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:25.265 18:14:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:25.265 18:14:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.265 18:14:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:25.265 18:14:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:25.265 18:14:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:25.265 18:14:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:25.265 18:14:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:25.265 256+0 records in 00:05:25.265 256+0 records out 00:05:25.265 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00679541 s, 154 MB/s 00:05:25.265 18:14:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:25.265 18:14:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:25.265 256+0 records in 00:05:25.265 256+0 records out 00:05:25.265 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0181496 s, 57.8 MB/s 00:05:25.265 18:14:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:25.265 18:14:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:25.265 256+0 records in 00:05:25.265 256+0 records out 00:05:25.265 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0197898 s, 53.0 MB/s 00:05:25.265 18:14:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:25.266 18:14:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.266 18:14:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:25.266 18:14:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:25.266 18:14:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:25.266 18:14:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:25.266 18:14:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:25.266 18:14:43 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:25.266 18:14:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:25.266 18:14:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:25.266 18:14:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:25.266 18:14:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:25.266 18:14:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:25.266 18:14:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.266 18:14:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.266 18:14:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:25.266 18:14:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:25.266 18:14:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:25.266 18:14:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:25.524 18:14:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:25.524 18:14:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:25.524 18:14:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:25.524 18:14:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.524 18:14:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:25.524 18:14:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:25.524 18:14:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:25.524 18:14:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:25.524 18:14:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:25.524 18:14:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:25.783 18:14:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:25.783 18:14:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:25.783 18:14:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:25.783 18:14:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.783 18:14:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:25.783 18:14:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:25.783 18:14:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:25.783 18:14:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:25.783 18:14:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:25.783 18:14:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.783 18:14:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:26.041 18:14:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:26.041 18:14:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:26.041 18:14:44 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:26.041 18:14:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:26.041 18:14:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:26.041 18:14:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:26.041 18:14:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:26.041 18:14:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:26.041 18:14:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:26.041 18:14:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:26.041 18:14:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:26.041 18:14:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:26.041 18:14:44 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:26.299 18:14:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:26.864 [2024-11-20 18:14:45.339070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:26.864 [2024-11-20 18:14:45.434922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.864 [2024-11-20 18:14:45.435080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.122 [2024-11-20 18:14:45.544608] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:27.122 [2024-11-20 18:14:45.544671] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:29.654 spdk_app_start Round 2 00:05:29.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:29.654 18:14:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:29.654 18:14:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:29.654 18:14:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58375 /var/tmp/spdk-nbd.sock 00:05:29.654 18:14:47 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58375 ']' 00:05:29.654 18:14:47 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:29.654 18:14:47 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.654 18:14:47 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
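[annotation] Teardown, traced just above for Round 1, is the mirror image: each device is detached with `nbd_stop_disk`, `waitfornbd_exit` polls `/proc/partitions` until the name disappears, and `nbd_get_count` re-queries `nbd_get_disks`, expecting the jq-extracted device list to grep-count to zero (the bare `true` at `@65` in the trace is the guard for grep's non-zero exit on empty input). A sketch from the `nbd_common.sh@35-66` traces:

```bash
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }

waitfornbd_exit() {                # @35-45: poll until the kernel drops the node
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions || break
        sleep 0.1
    done
    ! grep -q -w "$nbd_name" /proc/partitions
}

rpc nbd_stop_disk /dev/nbd0 && waitfornbd_exit nbd0
rpc nbd_stop_disk /dev/nbd1 && waitfornbd_exit nbd1

# @61-66: nbd_get_disks now returns '[]', so jq emits no names and grep -c
# reports 0 (grep's exit status is swallowed by the trailing true)
count=$(rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
[[ $count -eq 0 ]]
```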
00:05:29.654 18:14:47 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.654 18:14:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:29.654 18:14:47 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.654 18:14:47 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:29.654 18:14:47 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.654 Malloc0 00:05:29.654 18:14:48 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.913 Malloc1 00:05:29.913 18:14:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:29.913 18:14:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.913 18:14:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.913 18:14:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:29.913 18:14:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.913 18:14:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:29.913 18:14:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:29.913 18:14:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.913 18:14:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.913 18:14:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:29.913 18:14:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.913 18:14:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:29.913 18:14:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:29.913 18:14:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:29.913 18:14:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.913 18:14:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:30.172 /dev/nbd0 00:05:30.172 18:14:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:30.172 18:14:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:30.172 18:14:48 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:30.172 18:14:48 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:30.172 18:14:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:30.172 18:14:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:30.172 18:14:48 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:30.172 18:14:48 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:30.172 18:14:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:30.172 18:14:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:30.172 18:14:48 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.172 1+0 records in 00:05:30.172 1+0 records out 
00:05:30.172 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000190265 s, 21.5 MB/s 00:05:30.172 18:14:48 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.172 18:14:48 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:30.172 18:14:48 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.172 18:14:48 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:30.172 18:14:48 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:30.172 18:14:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.172 18:14:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.172 18:14:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:30.431 /dev/nbd1 00:05:30.431 18:14:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:30.431 18:14:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:30.431 18:14:48 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:30.431 18:14:48 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:30.431 18:14:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:30.431 18:14:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:30.431 18:14:48 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:30.431 18:14:48 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:30.431 18:14:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:30.431 18:14:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:30.431 18:14:48 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.431 1+0 records in 00:05:30.431 1+0 records out 00:05:30.431 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000193057 s, 21.2 MB/s 00:05:30.431 18:14:48 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.431 18:14:48 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:30.431 18:14:48 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.431 18:14:48 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:30.431 18:14:48 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:30.431 18:14:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.431 18:14:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.431 18:14:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.431 18:14:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.431 18:14:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:30.694 18:14:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:30.694 { 00:05:30.694 "nbd_device": "/dev/nbd0", 00:05:30.694 "bdev_name": "Malloc0" 00:05:30.694 }, 00:05:30.694 { 00:05:30.694 "nbd_device": "/dev/nbd1", 00:05:30.694 "bdev_name": "Malloc1" 00:05:30.694 } 
00:05:30.694 ]' 00:05:30.694 18:14:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:30.694 18:14:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:30.694 { 00:05:30.694 "nbd_device": "/dev/nbd0", 00:05:30.694 "bdev_name": "Malloc0" 00:05:30.694 }, 00:05:30.694 { 00:05:30.694 "nbd_device": "/dev/nbd1", 00:05:30.694 "bdev_name": "Malloc1" 00:05:30.694 } 00:05:30.694 ]' 00:05:30.694 18:14:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:30.694 /dev/nbd1' 00:05:30.694 18:14:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:30.694 /dev/nbd1' 00:05:30.694 18:14:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:30.694 18:14:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:30.694 18:14:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:30.694 18:14:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:30.694 18:14:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:30.694 18:14:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:30.694 18:14:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.694 18:14:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.694 18:14:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:30.694 18:14:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:30.694 18:14:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:30.694 18:14:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:30.694 256+0 records in 00:05:30.694 256+0 records out 00:05:30.694 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00669106 s, 157 MB/s 00:05:30.694 18:14:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.694 18:14:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:30.694 256+0 records in 00:05:30.694 256+0 records out 00:05:30.694 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0163843 s, 64.0 MB/s 00:05:30.694 18:14:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.694 18:14:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:30.694 256+0 records in 00:05:30.694 256+0 records out 00:05:30.694 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0173149 s, 60.6 MB/s 00:05:30.694 18:14:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:30.694 18:14:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.694 18:14:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.694 18:14:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:30.694 18:14:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:30.694 18:14:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:30.694 18:14:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:30.694 18:14:49 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:30.694 18:14:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:30.694 18:14:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:30.694 18:14:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:30.694 18:14:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:30.695 18:14:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:30.695 18:14:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.695 18:14:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.695 18:14:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:30.695 18:14:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:30.695 18:14:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.695 18:14:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:30.954 18:14:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:30.954 18:14:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:30.954 18:14:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:30.954 18:14:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:30.954 18:14:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:30.954 18:14:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:30.954 18:14:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:30.954 18:14:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:30.954 18:14:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.954 18:14:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:31.212 18:14:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:31.212 18:14:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:31.212 18:14:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:31.212 18:14:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.212 18:14:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.212 18:14:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:31.212 18:14:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.212 18:14:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.212 18:14:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.212 18:14:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.212 18:14:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.212 18:14:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:31.212 18:14:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.212 18:14:49 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:05:31.470 18:14:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:31.470 18:14:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:31.470 18:14:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.470 18:14:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:31.470 18:14:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:31.470 18:14:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:31.470 18:14:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:31.470 18:14:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:31.470 18:14:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:31.470 18:14:49 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:31.728 18:14:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:32.294 [2024-11-20 18:14:50.790299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:32.294 [2024-11-20 18:14:50.868176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.294 [2024-11-20 18:14:50.868212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.567 [2024-11-20 18:14:50.976246] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:32.567 [2024-11-20 18:14:50.976321] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:35.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:35.109 18:14:53 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58375 /var/tmp/spdk-nbd.sock 00:05:35.109 18:14:53 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58375 ']' 00:05:35.109 18:14:53 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:35.109 18:14:53 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.109 18:14:53 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
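[annotation] What drives the Rounds 0-2 seen above is a small loop in `event.sh@23-35`: after each verify pass the test sends the app `spdk_kill_instance SIGTERM` over RPC and sleeps; the repeat app catches the signal, tears down its framework, and starts the next round itself, which is why fresh `Total cores available` / reactor / `Round N` banners reappear after every pass. Roughly, reusing the `rpc` and `waitforlisten` sketches from the annotations above:

```bash
# Round driver from event.sh@23-35 (paraphrased). The app restarts itself on
# SIGTERM, so each pass only has to wait, re-create bdevs, verify, and signal.
for i in {0..2}; do
    echo "spdk_app_start Round $i"                 # @24
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock   # @25

    rpc bdev_malloc_create 64 4096                 # @27: Malloc0
    rpc bdev_malloc_create 64 4096                 # @28: Malloc1
    nbd_rpc_data_verify /var/tmp/spdk-nbd.sock "Malloc0 Malloc1" "/dev/nbd0 /dev/nbd1"

    rpc spdk_kill_instance SIGTERM                 # @34: ask the app to restart
    sleep 3                                        # @35: give it time to come back
done
```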
00:05:35.109 18:14:53 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.109 18:14:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:35.109 18:14:53 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.109 18:14:53 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:35.109 18:14:53 event.app_repeat -- event/event.sh@39 -- # killprocess 58375 00:05:35.109 18:14:53 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58375 ']' 00:05:35.109 18:14:53 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58375 00:05:35.109 18:14:53 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:35.109 18:14:53 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:35.109 18:14:53 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58375 00:05:35.109 killing process with pid 58375 00:05:35.109 18:14:53 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:35.109 18:14:53 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:35.109 18:14:53 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58375' 00:05:35.109 18:14:53 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58375 00:05:35.109 18:14:53 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58375 00:05:35.676 spdk_app_start is called in Round 0. 00:05:35.676 Shutdown signal received, stop current app iteration 00:05:35.676 Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 reinitialization... 00:05:35.676 spdk_app_start is called in Round 1. 00:05:35.676 Shutdown signal received, stop current app iteration 00:05:35.676 Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 reinitialization... 00:05:35.676 spdk_app_start is called in Round 2. 00:05:35.676 Shutdown signal received, stop current app iteration 00:05:35.676 Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 reinitialization... 00:05:35.676 spdk_app_start is called in Round 3. 00:05:35.676 Shutdown signal received, stop current app iteration 00:05:35.676 ************************************ 00:05:35.676 END TEST app_repeat 00:05:35.676 ************************************ 00:05:35.676 18:14:54 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:35.676 18:14:54 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:35.676 00:05:35.676 real 0m17.772s 00:05:35.676 user 0m38.608s 00:05:35.676 sys 0m2.115s 00:05:35.676 18:14:54 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.676 18:14:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:35.676 18:14:54 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:35.676 18:14:54 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:35.676 18:14:54 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.676 18:14:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.676 18:14:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:35.676 ************************************ 00:05:35.676 START TEST cpu_locks 00:05:35.676 ************************************ 00:05:35.676 18:14:54 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:35.676 * Looking for test storage... 
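[annotation] The `run_test` wrapper that produced the `START TEST`/`END TEST` banners and the `real/user/sys` summary above is the same one entering `cpu_locks` here: it checks that it got a name plus a command, disables xtrace around its own bookkeeping, and times the test body. Simplified from the `autotest_common.sh@1105-1130` trace lines:

```bash
# run_test, simplified from the autotest_common.sh@1105-1130 traces.
run_test() {
    (( $# > 1 )) || return 1           # @1105: need a test name plus a command
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                          # produces the real/user/sys summary
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return "$rc"
}

run_test "cpu_locks" /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
```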
00:05:35.676 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:35.676 18:14:54 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:35.676 18:14:54 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:35.676 18:14:54 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:35.676 18:14:54 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:35.676 18:14:54 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.676 18:14:54 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.676 18:14:54 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.676 18:14:54 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.676 18:14:54 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.676 18:14:54 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.676 18:14:54 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.676 18:14:54 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.676 18:14:54 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.676 18:14:54 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.676 18:14:54 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.676 18:14:54 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:35.676 18:14:54 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:35.676 18:14:54 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.676 18:14:54 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:35.676 18:14:54 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:35.676 18:14:54 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:35.676 18:14:54 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.676 18:14:54 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:35.676 18:14:54 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.676 18:14:54 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:35.676 18:14:54 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:35.676 18:14:54 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.676 18:14:54 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:35.676 18:14:54 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.676 18:14:54 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.676 18:14:54 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.676 18:14:54 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:35.676 18:14:54 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.676 18:14:54 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:35.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.676 --rc genhtml_branch_coverage=1 00:05:35.676 --rc genhtml_function_coverage=1 00:05:35.676 --rc genhtml_legend=1 00:05:35.676 --rc geninfo_all_blocks=1 00:05:35.676 --rc geninfo_unexecuted_blocks=1 00:05:35.676 00:05:35.676 ' 00:05:35.676 18:14:54 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:35.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.676 --rc genhtml_branch_coverage=1 00:05:35.676 --rc genhtml_function_coverage=1 
00:05:35.676 --rc genhtml_legend=1 00:05:35.676 --rc geninfo_all_blocks=1 00:05:35.676 --rc geninfo_unexecuted_blocks=1 00:05:35.676 00:05:35.676 ' 00:05:35.676 18:14:54 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:35.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.676 --rc genhtml_branch_coverage=1 00:05:35.676 --rc genhtml_function_coverage=1 00:05:35.676 --rc genhtml_legend=1 00:05:35.676 --rc geninfo_all_blocks=1 00:05:35.676 --rc geninfo_unexecuted_blocks=1 00:05:35.676 00:05:35.676 ' 00:05:35.676 18:14:54 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:35.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.676 --rc genhtml_branch_coverage=1 00:05:35.676 --rc genhtml_function_coverage=1 00:05:35.676 --rc genhtml_legend=1 00:05:35.676 --rc geninfo_all_blocks=1 00:05:35.676 --rc geninfo_unexecuted_blocks=1 00:05:35.676 00:05:35.676 ' 00:05:35.676 18:14:54 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:35.676 18:14:54 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:35.676 18:14:54 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:35.676 18:14:54 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:35.676 18:14:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.676 18:14:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.676 18:14:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.676 ************************************ 00:05:35.676 START TEST default_locks 00:05:35.676 ************************************ 00:05:35.676 18:14:54 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:35.676 18:14:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58800 00:05:35.676 18:14:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:35.676 18:14:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58800 00:05:35.676 18:14:54 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58800 ']' 00:05:35.676 18:14:54 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.676 18:14:54 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.676 18:14:54 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.676 18:14:54 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.676 18:14:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.676 [2024-11-20 18:14:54.299998] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
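The lcov probe traced above gates the coverage flags on the installed lcov major version: scripts/common.sh splits both version strings on '.', '-' and ':' and compares them field by field. A minimal sketch of that comparison, assuming purely numeric fields (the real helper also validates each field with a regex):

    lt_version() {                      # returns 0 when version $1 < version $2
        local IFS='.-:' v1 v2 i a b
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            a=${v1[i]:-0}; b=${v2[i]:-0}
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1                        # equal versions are not "less than"
    }
    lt_version "$(lcov --version | awk '{print $NF}')" 2 \
        && echo "lcov < 2: keep the old-style --rc lcov_*_coverage options"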
00:05:35.677 [2024-11-20 18:14:54.300276] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58800 ] 00:05:35.935 [2024-11-20 18:14:54.458851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.935 [2024-11-20 18:14:54.557024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.872 18:14:55 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.872 18:14:55 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:36.872 18:14:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58800 00:05:36.872 18:14:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58800 00:05:36.872 18:14:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:36.872 18:14:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58800 00:05:36.872 18:14:55 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58800 ']' 00:05:36.872 18:14:55 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58800 00:05:36.872 18:14:55 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:36.872 18:14:55 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.872 18:14:55 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58800 00:05:36.872 killing process with pid 58800 00:05:36.872 18:14:55 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:36.872 18:14:55 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:36.872 18:14:55 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58800' 00:05:36.872 18:14:55 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58800 00:05:36.872 18:14:55 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58800 00:05:38.773 18:14:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58800 00:05:38.773 18:14:56 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:38.773 18:14:56 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58800 00:05:38.773 18:14:56 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:38.773 18:14:56 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.773 18:14:56 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:38.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:38.773 ERROR: process (pid: 58800) is no longer running 00:05:38.773 18:14:56 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.773 18:14:56 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58800 00:05:38.773 18:14:56 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58800 ']' 00:05:38.773 18:14:56 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.773 18:14:56 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.773 18:14:56 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.773 18:14:56 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.773 18:14:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.773 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58800) - No such process 00:05:38.773 18:14:56 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.773 18:14:56 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:38.773 18:14:56 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:38.773 18:14:56 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:38.773 18:14:56 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:38.773 18:14:56 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:38.773 18:14:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:38.773 18:14:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:38.773 18:14:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:38.773 18:14:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:38.773 00:05:38.773 real 0m2.658s 00:05:38.773 user 0m2.637s 00:05:38.773 sys 0m0.446s 00:05:38.773 18:14:56 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.773 18:14:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.773 ************************************ 00:05:38.773 END TEST default_locks 00:05:38.773 ************************************ 00:05:38.773 18:14:56 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:38.773 18:14:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.773 18:14:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.773 18:14:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.773 ************************************ 00:05:38.773 START TEST default_locks_via_rpc 00:05:38.773 ************************************ 00:05:38.773 18:14:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:38.773 18:14:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58864 00:05:38.773 18:14:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58864 00:05:38.773 18:14:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58864 ']' 
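Both halves of default_locks above hinge on one helper: locks_exist decides whether a target still holds its per-core lock by listing the file locks of its pid and looking for SPDK's lock files. A hedged sketch of that check (lslocks comes from util-linux; the pid is illustrative):

    has_cpu_lock() {
        # SPDK core locks are flocks taken on /var/tmp/spdk_cpu_lock_* files
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    has_cpu_lock 58800 && echo "spdk_tgt still holds a core lock"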
00:05:38.773 18:14:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:38.773 18:14:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.773 18:14:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.773 18:14:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.773 18:14:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.773 18:14:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.773 [2024-11-20 18:14:57.002616] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:05:38.773 [2024-11-20 18:14:57.002728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58864 ] 00:05:38.773 [2024-11-20 18:14:57.157201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.773 [2024-11-20 18:14:57.233064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.339 18:14:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.339 18:14:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:39.339 18:14:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:39.339 18:14:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.339 18:14:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.339 18:14:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.339 18:14:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:39.339 18:14:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:39.339 18:14:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:39.339 18:14:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:39.339 18:14:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:39.339 18:14:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.339 18:14:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.339 18:14:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.339 18:14:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58864 00:05:39.339 18:14:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58864 00:05:39.339 18:14:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:39.597 18:14:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58864 00:05:39.597 18:14:58 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58864 ']' 00:05:39.597 18:14:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58864 00:05:39.597 18:14:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:39.597 18:14:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.597 18:14:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58864 00:05:39.597 18:14:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.597 18:14:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.597 killing process with pid 58864 00:05:39.597 18:14:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58864' 00:05:39.597 18:14:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58864 00:05:39.597 18:14:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58864 00:05:40.972 00:05:40.972 real 0m2.250s 00:05:40.972 user 0m2.278s 00:05:40.972 sys 0m0.403s 00:05:40.972 18:14:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.972 ************************************ 00:05:40.972 END TEST default_locks_via_rpc 00:05:40.972 ************************************ 00:05:40.972 18:14:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.972 18:14:59 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:40.972 18:14:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.972 18:14:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.972 18:14:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.973 ************************************ 00:05:40.973 START TEST non_locking_app_on_locked_coremask 00:05:40.973 ************************************ 00:05:40.973 18:14:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:40.973 18:14:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58916 00:05:40.973 18:14:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58916 /var/tmp/spdk.sock 00:05:40.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.973 18:14:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58916 ']' 00:05:40.973 18:14:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.973 18:14:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.973 18:14:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:40.973 18:14:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
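default_locks_via_rpc, which just finished, releases and re-takes the same locks at runtime instead of at process startup. A hedged sketch of driving those two RPCs with SPDK's rpc.py client (socket path as used in this run):

    # drop every held core lock without restarting the target
    scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
    # re-claim the locks for the target's current cpumask
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks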
00:05:40.973 18:14:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.973 18:14:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.973 [2024-11-20 18:14:59.306225] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:05:40.973 [2024-11-20 18:14:59.306456] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58916 ] 00:05:40.973 [2024-11-20 18:14:59.449636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.973 [2024-11-20 18:14:59.525715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:41.539 18:15:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.539 18:15:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:41.539 18:15:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58932 00:05:41.539 18:15:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58932 /var/tmp/spdk2.sock 00:05:41.539 18:15:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58932 ']' 00:05:41.539 18:15:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:41.539 18:15:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:41.539 18:15:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.539 18:15:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:41.539 18:15:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.539 18:15:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.797 [2024-11-20 18:15:00.215633] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:05:41.797 [2024-11-20 18:15:00.215908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58932 ] 00:05:41.798 [2024-11-20 18:15:00.383941] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:41.798 [2024-11-20 18:15:00.383986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.056 [2024-11-20 18:15:00.543516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.059 18:15:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.059 18:15:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:43.059 18:15:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58916 00:05:43.059 18:15:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58916 00:05:43.059 18:15:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:43.318 18:15:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58916 00:05:43.318 18:15:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58916 ']' 00:05:43.318 18:15:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58916 00:05:43.318 18:15:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:43.318 18:15:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:43.318 18:15:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58916 00:05:43.318 killing process with pid 58916 00:05:43.318 18:15:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:43.318 18:15:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:43.318 18:15:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58916' 00:05:43.318 18:15:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58916 00:05:43.318 18:15:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58916 00:05:45.848 18:15:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58932 00:05:45.848 18:15:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58932 ']' 00:05:45.848 18:15:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58932 00:05:45.848 18:15:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:45.848 18:15:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.848 18:15:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58932 00:05:45.848 killing process with pid 58932 00:05:45.848 18:15:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.848 18:15:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.848 18:15:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58932' 00:05:45.848 18:15:04 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58932 00:05:45.848 18:15:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58932 00:05:46.783 00:05:46.783 real 0m6.089s 00:05:46.783 user 0m6.353s 00:05:46.783 sys 0m0.786s 00:05:46.783 18:15:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.783 ************************************ 00:05:46.783 END TEST non_locking_app_on_locked_coremask 00:05:46.783 ************************************ 00:05:46.784 18:15:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.784 18:15:05 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:46.784 18:15:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.784 18:15:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.784 18:15:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.784 ************************************ 00:05:46.784 START TEST locking_app_on_unlocked_coremask 00:05:46.784 ************************************ 00:05:46.784 18:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:46.784 18:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59023 00:05:46.784 18:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59023 /var/tmp/spdk.sock 00:05:46.784 18:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59023 ']' 00:05:46.784 18:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.784 18:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:46.784 18:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.784 18:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.784 18:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.784 18:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.043 [2024-11-20 18:15:05.447749] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:05:47.043 [2024-11-20 18:15:05.447867] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59023 ] 00:05:47.043 [2024-11-20 18:15:05.602016] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
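non_locking_app_on_locked_coremask, concluded above, shows the sanctioned way to share a claimed core: the second target opts out of locking entirely, so it never contends for the flock the first one holds. Condensed from the launches in the trace:

    build/bin/spdk_tgt -m 0x1 &                   # claims the core-0 lock file
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks \
        -r /var/tmp/spdk2.sock &                  # skips locking, coexists on core 0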
00:05:47.043 [2024-11-20 18:15:05.602064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.301 [2024-11-20 18:15:05.679130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:47.869 18:15:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.869 18:15:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:47.869 18:15:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:47.869 18:15:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59039 00:05:47.869 18:15:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59039 /var/tmp/spdk2.sock 00:05:47.869 18:15:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59039 ']' 00:05:47.869 18:15:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.869 18:15:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.869 18:15:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:47.869 18:15:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.869 18:15:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.869 [2024-11-20 18:15:06.344199] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:05:47.869 [2024-11-20 18:15:06.344483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59039 ] 00:05:48.127 [2024-11-20 18:15:06.506631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.127 [2024-11-20 18:15:06.667012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.060 18:15:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.060 18:15:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:49.060 18:15:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59039 00:05:49.060 18:15:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59039 00:05:49.060 18:15:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:49.319 18:15:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59023 00:05:49.319 18:15:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59023 ']' 00:05:49.319 18:15:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59023 00:05:49.319 18:15:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:49.319 18:15:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.319 18:15:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59023 00:05:49.319 killing process with pid 59023 00:05:49.319 18:15:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:49.319 18:15:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:49.319 18:15:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59023' 00:05:49.319 18:15:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59023 00:05:49.319 18:15:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59023 00:05:51.850 18:15:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59039 00:05:51.850 18:15:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59039 ']' 00:05:51.850 18:15:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59039 00:05:51.850 18:15:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:51.850 18:15:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.850 18:15:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59039 00:05:51.850 killing process with pid 59039 00:05:51.850 18:15:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:51.850 18:15:10 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:51.850 18:15:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59039' 00:05:51.850 18:15:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59039 00:05:51.850 18:15:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59039 00:05:53.248 ************************************ 00:05:53.248 END TEST locking_app_on_unlocked_coremask 00:05:53.248 00:05:53.248 real 0m6.072s 00:05:53.248 user 0m6.337s 00:05:53.248 sys 0m0.818s 00:05:53.248 18:15:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.248 18:15:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.248 ************************************ 00:05:53.248 18:15:11 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:53.248 18:15:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.248 18:15:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.248 18:15:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.248 ************************************ 00:05:53.248 START TEST locking_app_on_locked_coremask 00:05:53.248 ************************************ 00:05:53.248 18:15:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:53.248 18:15:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59130 00:05:53.248 18:15:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59130 /var/tmp/spdk.sock 00:05:53.248 18:15:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:53.248 18:15:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59130 ']' 00:05:53.248 18:15:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.248 18:15:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.248 18:15:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.248 18:15:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.248 18:15:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.248 [2024-11-20 18:15:11.559412] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
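locking_app_on_unlocked_coremask inverts the roles: the first target starts with --disable-cpumask-locks and leaves core 0 unclaimed, so a second, lock-enabled target on the same mask boots cleanly and takes the lock itself. Condensed from the trace:

    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # leaves core 0 unlocked
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # claims core 0 normally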
00:05:53.248 [2024-11-20 18:15:11.559573] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59130 ] 00:05:53.248 [2024-11-20 18:15:11.716846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.248 [2024-11-20 18:15:11.794707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.823 18:15:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.823 18:15:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:53.823 18:15:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59146 00:05:53.823 18:15:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:53.823 18:15:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59146 /var/tmp/spdk2.sock 00:05:53.823 18:15:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:53.823 18:15:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59146 /var/tmp/spdk2.sock 00:05:53.823 18:15:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:53.823 18:15:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:53.823 18:15:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:53.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:53.823 18:15:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:53.823 18:15:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59146 /var/tmp/spdk2.sock 00:05:53.823 18:15:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59146 ']' 00:05:53.823 18:15:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:53.823 18:15:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.823 18:15:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:53.823 18:15:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.823 18:15:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.823 [2024-11-20 18:15:12.419657] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:05:53.823 [2024-11-20 18:15:12.420222] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59146 ] 00:05:54.081 [2024-11-20 18:15:12.580887] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59130 has claimed it. 00:05:54.081 [2024-11-20 18:15:12.580935] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:54.648 ERROR: process (pid: 59146) is no longer running 00:05:54.648 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59146) - No such process 00:05:54.648 18:15:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.648 18:15:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:54.648 18:15:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:54.648 18:15:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:54.648 18:15:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:54.648 18:15:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:54.648 18:15:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59130 00:05:54.648 18:15:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59130 00:05:54.648 18:15:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:54.648 18:15:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59130 00:05:54.648 18:15:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59130 ']' 00:05:54.648 18:15:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59130 00:05:54.648 18:15:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:54.648 18:15:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:54.648 18:15:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59130 00:05:54.648 killing process with pid 59130 00:05:54.648 18:15:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:54.648 18:15:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:54.648 18:15:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59130' 00:05:54.649 18:15:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59130 00:05:54.649 18:15:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59130 00:05:56.024 ************************************ 00:05:56.025 END TEST locking_app_on_locked_coremask 00:05:56.025 ************************************ 00:05:56.025 00:05:56.025 real 0m2.909s 00:05:56.025 user 0m3.119s 00:05:56.025 sys 0m0.482s 00:05:56.025 18:15:14 
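The failure path asserted above is the heart of locking_app_on_locked_coremask: with locks active on both sides, the second target must report 'Cannot create lock on core 0' and exit. A simplified, hedged stand-in for the suite's NOT waitforlisten idiom (the sleep duration is illustrative):

    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
    pid2=$!
    sleep 1
    if kill -0 "$pid2" 2>/dev/null; then
        echo "BUG: second target survived on an already-claimed core" >&2
    fi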
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.025 18:15:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.025 18:15:14 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:56.025 18:15:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.025 18:15:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.025 18:15:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.025 ************************************ 00:05:56.025 START TEST locking_overlapped_coremask 00:05:56.025 ************************************ 00:05:56.025 18:15:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:56.025 18:15:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59199 00:05:56.025 18:15:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59199 /var/tmp/spdk.sock 00:05:56.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.025 18:15:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59199 ']' 00:05:56.025 18:15:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.025 18:15:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.025 18:15:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.025 18:15:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.025 18:15:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.025 18:15:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:56.025 [2024-11-20 18:15:14.562601] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:05:56.025 [2024-11-20 18:15:14.562771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59199 ] 00:05:56.285 [2024-11-20 18:15:14.737273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:56.285 [2024-11-20 18:15:14.816625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.285 [2024-11-20 18:15:14.816814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.285 [2024-11-20 18:15:14.816851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.852 18:15:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.852 18:15:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:56.852 18:15:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59217 00:05:56.852 18:15:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59217 /var/tmp/spdk2.sock 00:05:56.852 18:15:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:56.852 18:15:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:56.852 18:15:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59217 /var/tmp/spdk2.sock 00:05:56.852 18:15:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:56.852 18:15:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.852 18:15:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:56.852 18:15:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.852 18:15:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59217 /var/tmp/spdk2.sock 00:05:56.852 18:15:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59217 ']' 00:05:56.852 18:15:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:56.852 18:15:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.852 18:15:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:56.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:56.852 18:15:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.852 18:15:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.111 [2024-11-20 18:15:15.483000] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:05:57.111 [2024-11-20 18:15:15.483585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59217 ] 00:05:57.111 [2024-11-20 18:15:15.665302] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59199 has claimed it. 00:05:57.111 [2024-11-20 18:15:15.665354] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:57.678 ERROR: process (pid: 59217) is no longer running 00:05:57.678 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59217) - No such process 00:05:57.678 18:15:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.678 18:15:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:57.678 18:15:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:57.678 18:15:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:57.678 18:15:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:57.678 18:15:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:57.678 18:15:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:57.678 18:15:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:57.678 18:15:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:57.678 18:15:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:57.678 18:15:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59199 00:05:57.678 18:15:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59199 ']' 00:05:57.678 18:15:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59199 00:05:57.678 18:15:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:57.678 18:15:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.678 18:15:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59199 00:05:57.678 18:15:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.678 killing process with pid 59199 00:05:57.678 18:15:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.678 18:15:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59199' 00:05:57.678 18:15:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59199 00:05:57.678 18:15:16 
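locking_overlapped_coremask pits a -m 0x7 target (cores 0-2) against a -m 0x1c one (cores 2-4); core 2 is the collision behind the 'Cannot create lock on core 2' error above. A quick hedged way to compute such an overlap before launching anything:

    overlap=$(( 0x7 & 0x1c ))                          # bitwise AND of the two cpumasks
    printf 'contested cores mask: 0x%x\n' "$overlap"   # prints 0x4, i.e. core 2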
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59199 00:05:59.054 ************************************ 00:05:59.054 END TEST locking_overlapped_coremask 00:05:59.054 ************************************ 00:05:59.054 00:05:59.054 real 0m2.912s 00:05:59.054 user 0m7.984s 00:05:59.054 sys 0m0.432s 00:05:59.054 18:15:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.054 18:15:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.054 18:15:17 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:59.054 18:15:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.054 18:15:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.054 18:15:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.054 ************************************ 00:05:59.054 START TEST locking_overlapped_coremask_via_rpc 00:05:59.054 ************************************ 00:05:59.054 18:15:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:59.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.054 18:15:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59270 00:05:59.054 18:15:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59270 /var/tmp/spdk.sock 00:05:59.054 18:15:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59270 ']' 00:05:59.054 18:15:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.054 18:15:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.054 18:15:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.054 18:15:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.054 18:15:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.054 18:15:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:59.054 [2024-11-20 18:15:17.478080] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:05:59.054 [2024-11-20 18:15:17.478178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59270 ] 00:05:59.054 [2024-11-20 18:15:17.626380] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
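After the losing target exits, check_remaining_locks (traced above) verifies that the survivor still holds exactly its own per-core files and nothing else. The comparison reduces to a glob matched against a brace expansion:

    # one lock file per claimed core; for -m 0x7 we expect cores 000..002
    locks=(/var/tmp/spdk_cpu_lock_*)
    expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${expected[*]}" ]] && echo "only cores 0-2 remain locked"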
00:05:59.054 [2024-11-20 18:15:17.626415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:59.312 [2024-11-20 18:15:17.707045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.312 [2024-11-20 18:15:17.707238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.312 [2024-11-20 18:15:17.707340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.879 18:15:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.879 18:15:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:59.879 18:15:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59288 00:05:59.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:59.879 18:15:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59288 /var/tmp/spdk2.sock 00:05:59.879 18:15:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:59.879 18:15:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59288 ']' 00:05:59.879 18:15:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.879 18:15:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.879 18:15:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.879 18:15:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.879 18:15:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.879 [2024-11-20 18:15:18.377981] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:05:59.879 [2024-11-20 18:15:18.378115] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59288 ] 00:06:00.137 [2024-11-20 18:15:18.540100] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:00.137 [2024-11-20 18:15:18.540140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:00.137 [2024-11-20 18:15:18.746955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.137 [2024-11-20 18:15:18.747059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.137 [2024-11-20 18:15:18.747086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:01.510 18:15:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.510 18:15:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:01.510 18:15:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:01.510 18:15:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.510 18:15:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.510 18:15:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.510 18:15:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:01.510 18:15:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:01.510 18:15:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:01.510 18:15:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:01.510 18:15:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.510 18:15:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:01.510 18:15:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.510 18:15:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:01.510 18:15:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.510 18:15:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.510 [2024-11-20 18:15:19.892219] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59270 has claimed it. 00:06:01.510 request: 00:06:01.510 { 00:06:01.510 "method": "framework_enable_cpumask_locks", 00:06:01.510 "req_id": 1 00:06:01.510 } 00:06:01.510 Got JSON-RPC error response 00:06:01.510 response: 00:06:01.510 { 00:06:01.510 "code": -32603, 00:06:01.510 "message": "Failed to claim CPU core: 2" 00:06:01.510 } 00:06:01.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
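Note: both targets start with --disable-cpumask-locks; the test then enables locks over RPC. The first framework_enable_cpumask_locks call succeeds and claims lock files for cores 0-2, so the same call against /var/tmp/spdk2.sock fails with -32603 because core 2 is already held. A sketch of the underlying mechanism, assuming the /var/tmp/spdk_cpu_lock_<core> files verified later in this log; flock here stands in for the file lock spdk_tgt actually takes:

    # Sketch: claiming a core means holding an exclusive lock on its file.
    core=002
    exec {fd}> "/var/tmp/spdk_cpu_lock_${core}"
    flock -xn "$fd" || echo "core ${core} already claimed" >&2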
00:06:01.510 18:15:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:01.511 18:15:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:01.511 18:15:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:01.511 18:15:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:01.511 18:15:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:01.511 18:15:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59270 /var/tmp/spdk.sock 00:06:01.511 18:15:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59270 ']' 00:06:01.511 18:15:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.511 18:15:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.511 18:15:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.511 18:15:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.511 18:15:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.511 18:15:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.511 18:15:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:01.511 18:15:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59288 /var/tmp/spdk2.sock 00:06:01.511 18:15:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59288 ']' 00:06:01.511 18:15:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.511 18:15:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.511 18:15:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:01.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
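Note: after the expected -32603 failure, the check_remaining_locks step just below verifies that only the primary instance's lock files survive. A condensed form of that check, lifted from the trace that follows:

    # Condensed check_remaining_locks: the glob of live lock files must
    # match exactly the cores claimed by the primary target (000-002).
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]] \
        || echo "unexpected lock files: ${locks[*]}" >&2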
00:06:01.511 18:15:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.511 18:15:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.768 ************************************ 00:06:01.768 END TEST locking_overlapped_coremask_via_rpc 00:06:01.768 ************************************ 00:06:01.768 18:15:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.768 18:15:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:01.768 18:15:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:01.769 18:15:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:01.769 18:15:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:01.769 18:15:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:01.769 00:06:01.769 real 0m2.914s 00:06:01.769 user 0m1.091s 00:06:01.769 sys 0m0.123s 00:06:01.769 18:15:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.769 18:15:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.769 18:15:20 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:01.769 18:15:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59270 ]] 00:06:01.769 18:15:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59270 00:06:01.769 18:15:20 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59270 ']' 00:06:01.769 18:15:20 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59270 00:06:01.769 18:15:20 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:01.769 18:15:20 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.769 18:15:20 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59270 00:06:01.769 18:15:20 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.769 18:15:20 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.769 18:15:20 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59270' 00:06:01.769 killing process with pid 59270 00:06:01.769 18:15:20 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59270 00:06:01.769 18:15:20 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59270 00:06:03.206 18:15:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59288 ]] 00:06:03.206 18:15:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59288 00:06:03.206 18:15:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59288 ']' 00:06:03.206 18:15:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59288 00:06:03.206 18:15:21 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:03.206 18:15:21 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:03.206 
18:15:21 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59288 00:06:03.206 18:15:21 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:03.206 18:15:21 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:03.206 18:15:21 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59288' 00:06:03.206 killing process with pid 59288 00:06:03.206 18:15:21 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59288 00:06:03.206 18:15:21 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59288 00:06:04.156 18:15:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:04.156 Process with pid 59270 is not found 00:06:04.156 Process with pid 59288 is not found 00:06:04.156 18:15:22 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:04.156 18:15:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59270 ]] 00:06:04.156 18:15:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59270 00:06:04.156 18:15:22 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59270 ']' 00:06:04.156 18:15:22 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59270 00:06:04.156 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59270) - No such process 00:06:04.156 18:15:22 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59270 is not found' 00:06:04.156 18:15:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59288 ]] 00:06:04.156 18:15:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59288 00:06:04.156 18:15:22 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59288 ']' 00:06:04.156 18:15:22 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59288 00:06:04.156 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59288) - No such process 00:06:04.156 18:15:22 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59288 is not found' 00:06:04.156 18:15:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:04.156 ************************************ 00:06:04.156 END TEST cpu_locks 00:06:04.156 ************************************ 00:06:04.156 00:06:04.156 real 0m28.630s 00:06:04.156 user 0m49.788s 00:06:04.156 sys 0m4.247s 00:06:04.156 18:15:22 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.156 18:15:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.156 ************************************ 00:06:04.156 END TEST event 00:06:04.156 ************************************ 00:06:04.156 00:06:04.156 real 0m53.667s 00:06:04.156 user 1m39.629s 00:06:04.156 sys 0m7.144s 00:06:04.156 18:15:22 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.156 18:15:22 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.417 18:15:22 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:04.417 18:15:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.417 18:15:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.417 18:15:22 -- common/autotest_common.sh@10 -- # set +x 00:06:04.417 ************************************ 00:06:04.417 START TEST thread 00:06:04.417 ************************************ 00:06:04.417 18:15:22 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:04.417 * Looking for test storage... 
00:06:04.417 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:04.417 18:15:22 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:04.417 18:15:22 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:04.417 18:15:22 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:04.417 18:15:22 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:04.417 18:15:22 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.417 18:15:22 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.417 18:15:22 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.417 18:15:22 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.417 18:15:22 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.417 18:15:22 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.417 18:15:22 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.417 18:15:22 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.417 18:15:22 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.417 18:15:22 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.417 18:15:22 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.417 18:15:22 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:04.417 18:15:22 thread -- scripts/common.sh@345 -- # : 1 00:06:04.417 18:15:22 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.417 18:15:22 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:04.417 18:15:22 thread -- scripts/common.sh@365 -- # decimal 1 00:06:04.417 18:15:22 thread -- scripts/common.sh@353 -- # local d=1 00:06:04.417 18:15:22 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.417 18:15:22 thread -- scripts/common.sh@355 -- # echo 1 00:06:04.417 18:15:22 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.417 18:15:22 thread -- scripts/common.sh@366 -- # decimal 2 00:06:04.417 18:15:22 thread -- scripts/common.sh@353 -- # local d=2 00:06:04.417 18:15:22 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.417 18:15:22 thread -- scripts/common.sh@355 -- # echo 2 00:06:04.417 18:15:22 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.417 18:15:22 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.417 18:15:22 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.417 18:15:22 thread -- scripts/common.sh@368 -- # return 0 00:06:04.417 18:15:22 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.417 18:15:22 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:04.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.417 --rc genhtml_branch_coverage=1 00:06:04.417 --rc genhtml_function_coverage=1 00:06:04.417 --rc genhtml_legend=1 00:06:04.417 --rc geninfo_all_blocks=1 00:06:04.417 --rc geninfo_unexecuted_blocks=1 00:06:04.417 00:06:04.417 ' 00:06:04.417 18:15:22 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:04.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.417 --rc genhtml_branch_coverage=1 00:06:04.417 --rc genhtml_function_coverage=1 00:06:04.418 --rc genhtml_legend=1 00:06:04.418 --rc geninfo_all_blocks=1 00:06:04.418 --rc geninfo_unexecuted_blocks=1 00:06:04.418 00:06:04.418 ' 00:06:04.418 18:15:22 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:04.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:04.418 --rc genhtml_branch_coverage=1 00:06:04.418 --rc genhtml_function_coverage=1 00:06:04.418 --rc genhtml_legend=1 00:06:04.418 --rc geninfo_all_blocks=1 00:06:04.418 --rc geninfo_unexecuted_blocks=1 00:06:04.418 00:06:04.418 ' 00:06:04.418 18:15:22 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:04.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.418 --rc genhtml_branch_coverage=1 00:06:04.418 --rc genhtml_function_coverage=1 00:06:04.418 --rc genhtml_legend=1 00:06:04.418 --rc geninfo_all_blocks=1 00:06:04.418 --rc geninfo_unexecuted_blocks=1 00:06:04.418 00:06:04.418 ' 00:06:04.418 18:15:22 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:04.418 18:15:22 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:04.418 18:15:22 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.418 18:15:22 thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.418 ************************************ 00:06:04.418 START TEST thread_poller_perf 00:06:04.418 ************************************ 00:06:04.418 18:15:22 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:04.418 [2024-11-20 18:15:22.967912] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:06:04.418 [2024-11-20 18:15:22.968185] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59437 ] 00:06:04.677 [2024-11-20 18:15:23.124217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.677 [2024-11-20 18:15:23.198945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.677 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:06.051 [2024-11-20T18:15:24.680Z] ====================================== 00:06:06.051 [2024-11-20T18:15:24.680Z] busy:2607916504 (cyc) 00:06:06.051 [2024-11-20T18:15:24.680Z] total_run_count: 402000 00:06:06.051 [2024-11-20T18:15:24.680Z] tsc_hz: 2600000000 (cyc) 00:06:06.051 [2024-11-20T18:15:24.680Z] ====================================== 00:06:06.051 [2024-11-20T18:15:24.680Z] poller_cost: 6487 (cyc), 2495 (nsec) 00:06:06.051 00:06:06.051 real 0m1.387s 00:06:06.051 user 0m1.214s 00:06:06.051 sys 0m0.066s 00:06:06.051 18:15:24 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.051 18:15:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:06.051 ************************************ 00:06:06.051 END TEST thread_poller_perf 00:06:06.051 ************************************ 00:06:06.051 18:15:24 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:06.051 18:15:24 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:06.051 18:15:24 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.051 18:15:24 thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.051 ************************************ 00:06:06.051 START TEST thread_poller_perf 00:06:06.051 ************************************ 00:06:06.052 18:15:24 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:06.052 [2024-11-20 18:15:24.397910] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:06:06.052 [2024-11-20 18:15:24.398020] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59474 ] 00:06:06.052 [2024-11-20 18:15:24.556659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.052 Running 1000 pollers for 1 seconds with 0 microseconds period. 
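Note: poller_cost in the report above is simply the busy TSC cycles divided by the number of poller invocations, converted to nanoseconds with the TSC rate. The first run's figures reproduce with plain shell arithmetic:

    # Worked check of the first run: 2607916504 cyc over 402000 polls
    # at tsc_hz=2600000000.
    busy=2607916504 runs=402000 tsc_hz=2600000000
    echo "$((busy / runs)) cyc/poll"                          # 6487
    echo "$((busy * 1000000000 / runs / tsc_hz)) nsec/poll"   # 2495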
00:06:06.052 [2024-11-20 18:15:24.631058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.429 [2024-11-20T18:15:26.058Z] ====================================== 00:06:07.429 [2024-11-20T18:15:26.058Z] busy:2602298532 (cyc) 00:06:07.429 [2024-11-20T18:15:26.058Z] total_run_count: 5290000 00:06:07.429 [2024-11-20T18:15:26.058Z] tsc_hz: 2600000000 (cyc) 00:06:07.429 [2024-11-20T18:15:26.058Z] ====================================== 00:06:07.429 [2024-11-20T18:15:26.058Z] poller_cost: 491 (cyc), 188 (nsec) 00:06:07.429 00:06:07.429 real 0m1.385s 00:06:07.429 user 0m1.212s 00:06:07.429 sys 0m0.067s 00:06:07.429 18:15:25 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.429 ************************************ 00:06:07.429 END TEST thread_poller_perf 00:06:07.429 ************************************ 00:06:07.429 18:15:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:07.429 18:15:25 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:07.429 ************************************ 00:06:07.429 END TEST thread 00:06:07.429 ************************************ 00:06:07.429 00:06:07.429 real 0m2.979s 00:06:07.429 user 0m2.517s 00:06:07.429 sys 0m0.238s 00:06:07.429 18:15:25 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.429 18:15:25 thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.429 18:15:25 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:07.429 18:15:25 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:07.429 18:15:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.429 18:15:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.429 18:15:25 -- common/autotest_common.sh@10 -- # set +x 00:06:07.429 ************************************ 00:06:07.429 START TEST app_cmdline 00:06:07.429 ************************************ 00:06:07.429 18:15:25 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:07.429 * Looking for test storage... 
00:06:07.429 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:07.429 18:15:25 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:07.429 18:15:25 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:07.429 18:15:25 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:07.429 18:15:25 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:07.429 18:15:25 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.429 18:15:25 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.429 18:15:25 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.429 18:15:25 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.429 18:15:25 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.429 18:15:25 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.429 18:15:25 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.429 18:15:25 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.429 18:15:25 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.429 18:15:25 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.429 18:15:25 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.429 18:15:25 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:07.430 18:15:25 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:07.430 18:15:25 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.430 18:15:25 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:07.430 18:15:25 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:07.430 18:15:25 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:07.430 18:15:25 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.430 18:15:25 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:07.430 18:15:25 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.430 18:15:25 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:07.430 18:15:25 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:07.430 18:15:25 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.430 18:15:25 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:07.430 18:15:25 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.430 18:15:25 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.430 18:15:25 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.430 18:15:25 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:07.430 18:15:25 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.430 18:15:25 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:07.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.430 --rc genhtml_branch_coverage=1 00:06:07.430 --rc genhtml_function_coverage=1 00:06:07.430 --rc genhtml_legend=1 00:06:07.430 --rc geninfo_all_blocks=1 00:06:07.430 --rc geninfo_unexecuted_blocks=1 00:06:07.430 00:06:07.430 ' 00:06:07.430 18:15:25 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:07.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.430 --rc genhtml_branch_coverage=1 00:06:07.430 --rc genhtml_function_coverage=1 00:06:07.430 --rc genhtml_legend=1 00:06:07.430 --rc geninfo_all_blocks=1 00:06:07.430 --rc geninfo_unexecuted_blocks=1 00:06:07.430 
00:06:07.430 ' 00:06:07.430 18:15:25 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:07.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.430 --rc genhtml_branch_coverage=1 00:06:07.430 --rc genhtml_function_coverage=1 00:06:07.430 --rc genhtml_legend=1 00:06:07.430 --rc geninfo_all_blocks=1 00:06:07.430 --rc geninfo_unexecuted_blocks=1 00:06:07.430 00:06:07.430 ' 00:06:07.430 18:15:25 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:07.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.430 --rc genhtml_branch_coverage=1 00:06:07.430 --rc genhtml_function_coverage=1 00:06:07.430 --rc genhtml_legend=1 00:06:07.430 --rc geninfo_all_blocks=1 00:06:07.430 --rc geninfo_unexecuted_blocks=1 00:06:07.430 00:06:07.430 ' 00:06:07.430 18:15:25 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:07.430 18:15:25 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59557 00:06:07.430 18:15:25 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59557 00:06:07.430 18:15:25 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59557 ']' 00:06:07.430 18:15:25 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.430 18:15:25 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.430 18:15:25 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.430 18:15:25 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.430 18:15:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:07.430 18:15:25 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:07.430 [2024-11-20 18:15:26.031994] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
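Note: this target is launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are served; the env_dpdk_get_mem_stats call a few entries below is rejected with JSON-RPC -32601 ("Method not found"). The exchange, condensed:

    # Condensed view of what the cmdline test exercises against a target
    # started with --rpcs-allowed spdk_get_version,rpc_get_methods:
    scripts/rpc.py spdk_get_version          # allowed
    scripts/rpc.py rpc_get_methods           # allowed
    scripts/rpc.py env_dpdk_get_mem_stats    # rejected: -32601 Method not found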
00:06:07.430 [2024-11-20 18:15:26.032447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59557 ] 00:06:07.689 [2024-11-20 18:15:26.188699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.689 [2024-11-20 18:15:26.265930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.255 18:15:26 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.255 18:15:26 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:08.255 18:15:26 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:08.513 { 00:06:08.513 "version": "SPDK v25.01-pre git sha1 557f022f6", 00:06:08.513 "fields": { 00:06:08.513 "major": 25, 00:06:08.513 "minor": 1, 00:06:08.513 "patch": 0, 00:06:08.513 "suffix": "-pre", 00:06:08.513 "commit": "557f022f6" 00:06:08.513 } 00:06:08.513 } 00:06:08.513 18:15:26 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:08.513 18:15:26 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:08.513 18:15:26 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:08.513 18:15:26 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:08.513 18:15:26 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:08.513 18:15:26 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:08.513 18:15:26 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:08.513 18:15:26 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.513 18:15:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:08.513 18:15:26 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.513 18:15:26 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:08.513 18:15:26 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:08.513 18:15:26 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:08.513 18:15:26 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:08.513 18:15:26 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:08.513 18:15:26 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:08.513 18:15:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.513 18:15:26 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:08.513 18:15:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.513 18:15:27 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:08.513 18:15:26 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.513 18:15:27 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:08.513 18:15:27 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:08.513 18:15:27 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:08.772 request: 00:06:08.772 { 00:06:08.772 "method": "env_dpdk_get_mem_stats", 00:06:08.772 "req_id": 1 00:06:08.772 } 00:06:08.772 Got JSON-RPC error response 00:06:08.772 response: 00:06:08.772 { 00:06:08.772 "code": -32601, 00:06:08.772 "message": "Method not found" 00:06:08.772 } 00:06:08.772 18:15:27 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:08.772 18:15:27 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:08.772 18:15:27 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:08.772 18:15:27 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:08.772 18:15:27 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59557 00:06:08.772 18:15:27 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59557 ']' 00:06:08.772 18:15:27 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59557 00:06:08.772 18:15:27 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:08.772 18:15:27 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.772 18:15:27 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59557 00:06:08.772 killing process with pid 59557 00:06:08.772 18:15:27 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:08.772 18:15:27 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:08.772 18:15:27 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59557' 00:06:08.772 18:15:27 app_cmdline -- common/autotest_common.sh@973 -- # kill 59557 00:06:08.772 18:15:27 app_cmdline -- common/autotest_common.sh@978 -- # wait 59557 00:06:10.149 ************************************ 00:06:10.149 END TEST app_cmdline 00:06:10.149 ************************************ 00:06:10.149 00:06:10.149 real 0m2.548s 00:06:10.149 user 0m2.804s 00:06:10.149 sys 0m0.374s 00:06:10.149 18:15:28 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.149 18:15:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:10.149 18:15:28 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:10.149 18:15:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.149 18:15:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.149 18:15:28 -- common/autotest_common.sh@10 -- # set +x 00:06:10.149 ************************************ 00:06:10.149 START TEST version 00:06:10.149 ************************************ 00:06:10.149 18:15:28 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:10.149 * Looking for test storage... 
00:06:10.149 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:10.149 18:15:28 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:10.149 18:15:28 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:10.149 18:15:28 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:10.149 18:15:28 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:10.149 18:15:28 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.149 18:15:28 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.149 18:15:28 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.149 18:15:28 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.149 18:15:28 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.149 18:15:28 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.149 18:15:28 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.149 18:15:28 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.149 18:15:28 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.149 18:15:28 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.149 18:15:28 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.149 18:15:28 version -- scripts/common.sh@344 -- # case "$op" in 00:06:10.149 18:15:28 version -- scripts/common.sh@345 -- # : 1 00:06:10.149 18:15:28 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.149 18:15:28 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:10.149 18:15:28 version -- scripts/common.sh@365 -- # decimal 1 00:06:10.149 18:15:28 version -- scripts/common.sh@353 -- # local d=1 00:06:10.149 18:15:28 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.149 18:15:28 version -- scripts/common.sh@355 -- # echo 1 00:06:10.149 18:15:28 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.149 18:15:28 version -- scripts/common.sh@366 -- # decimal 2 00:06:10.149 18:15:28 version -- scripts/common.sh@353 -- # local d=2 00:06:10.149 18:15:28 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.149 18:15:28 version -- scripts/common.sh@355 -- # echo 2 00:06:10.149 18:15:28 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.149 18:15:28 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.149 18:15:28 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.149 18:15:28 version -- scripts/common.sh@368 -- # return 0 00:06:10.149 18:15:28 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.149 18:15:28 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:10.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.149 --rc genhtml_branch_coverage=1 00:06:10.149 --rc genhtml_function_coverage=1 00:06:10.149 --rc genhtml_legend=1 00:06:10.149 --rc geninfo_all_blocks=1 00:06:10.149 --rc geninfo_unexecuted_blocks=1 00:06:10.149 00:06:10.149 ' 00:06:10.149 18:15:28 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:10.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.149 --rc genhtml_branch_coverage=1 00:06:10.149 --rc genhtml_function_coverage=1 00:06:10.150 --rc genhtml_legend=1 00:06:10.150 --rc geninfo_all_blocks=1 00:06:10.150 --rc geninfo_unexecuted_blocks=1 00:06:10.150 00:06:10.150 ' 00:06:10.150 18:15:28 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:10.150 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:10.150 --rc genhtml_branch_coverage=1 00:06:10.150 --rc genhtml_function_coverage=1 00:06:10.150 --rc genhtml_legend=1 00:06:10.150 --rc geninfo_all_blocks=1 00:06:10.150 --rc geninfo_unexecuted_blocks=1 00:06:10.150 00:06:10.150 ' 00:06:10.150 18:15:28 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:10.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.150 --rc genhtml_branch_coverage=1 00:06:10.150 --rc genhtml_function_coverage=1 00:06:10.150 --rc genhtml_legend=1 00:06:10.150 --rc geninfo_all_blocks=1 00:06:10.150 --rc geninfo_unexecuted_blocks=1 00:06:10.150 00:06:10.150 ' 00:06:10.150 18:15:28 version -- app/version.sh@17 -- # get_header_version major 00:06:10.150 18:15:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:10.150 18:15:28 version -- app/version.sh@14 -- # tr -d '"' 00:06:10.150 18:15:28 version -- app/version.sh@14 -- # cut -f2 00:06:10.150 18:15:28 version -- app/version.sh@17 -- # major=25 00:06:10.150 18:15:28 version -- app/version.sh@18 -- # get_header_version minor 00:06:10.150 18:15:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:10.150 18:15:28 version -- app/version.sh@14 -- # cut -f2 00:06:10.150 18:15:28 version -- app/version.sh@14 -- # tr -d '"' 00:06:10.150 18:15:28 version -- app/version.sh@18 -- # minor=1 00:06:10.150 18:15:28 version -- app/version.sh@19 -- # get_header_version patch 00:06:10.150 18:15:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:10.150 18:15:28 version -- app/version.sh@14 -- # cut -f2 00:06:10.150 18:15:28 version -- app/version.sh@14 -- # tr -d '"' 00:06:10.150 18:15:28 version -- app/version.sh@19 -- # patch=0 00:06:10.150 18:15:28 version -- app/version.sh@20 -- # get_header_version suffix 00:06:10.150 18:15:28 version -- app/version.sh@14 -- # cut -f2 00:06:10.150 18:15:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:10.150 18:15:28 version -- app/version.sh@14 -- # tr -d '"' 00:06:10.150 18:15:28 version -- app/version.sh@20 -- # suffix=-pre 00:06:10.150 18:15:28 version -- app/version.sh@22 -- # version=25.1 00:06:10.150 18:15:28 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:10.150 18:15:28 version -- app/version.sh@28 -- # version=25.1rc0 00:06:10.150 18:15:28 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:10.150 18:15:28 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:10.150 18:15:28 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:10.150 18:15:28 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:10.150 00:06:10.150 real 0m0.199s 00:06:10.150 user 0m0.126s 00:06:10.150 sys 0m0.098s 00:06:10.150 18:15:28 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.150 18:15:28 version -- common/autotest_common.sh@10 -- # set +x 00:06:10.150 ************************************ 00:06:10.150 END TEST version 00:06:10.150 ************************************ 00:06:10.150 18:15:28 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:10.150 18:15:28 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:10.150 18:15:28 -- spdk/autotest.sh@194 -- # uname -s 00:06:10.150 18:15:28 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:10.150 18:15:28 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:10.150 18:15:28 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:10.150 18:15:28 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:10.150 18:15:28 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:10.150 18:15:28 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:10.150 18:15:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.150 18:15:28 -- common/autotest_common.sh@10 -- # set +x 00:06:10.150 ************************************ 00:06:10.150 START TEST blockdev_nvme 00:06:10.150 ************************************ 00:06:10.150 18:15:28 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:10.150 * Looking for test storage... 00:06:10.150 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:10.150 18:15:28 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:10.150 18:15:28 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:06:10.150 18:15:28 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:10.411 18:15:28 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:10.411 18:15:28 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.411 18:15:28 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.411 18:15:28 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.411 18:15:28 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.411 18:15:28 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.411 18:15:28 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.411 18:15:28 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.411 18:15:28 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.411 18:15:28 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.411 18:15:28 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.411 18:15:28 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.411 18:15:28 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:10.411 18:15:28 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:10.411 18:15:28 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.411 18:15:28 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:10.411 18:15:28 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:10.411 18:15:28 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:10.411 18:15:28 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.411 18:15:28 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:10.411 18:15:28 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.411 18:15:28 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:10.411 18:15:28 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:10.411 18:15:28 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.411 18:15:28 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:10.411 18:15:28 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.411 18:15:28 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.412 18:15:28 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.412 18:15:28 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:10.412 18:15:28 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.412 18:15:28 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:10.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.412 --rc genhtml_branch_coverage=1 00:06:10.412 --rc genhtml_function_coverage=1 00:06:10.412 --rc genhtml_legend=1 00:06:10.412 --rc geninfo_all_blocks=1 00:06:10.412 --rc geninfo_unexecuted_blocks=1 00:06:10.412 00:06:10.412 ' 00:06:10.412 18:15:28 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:10.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.412 --rc genhtml_branch_coverage=1 00:06:10.412 --rc genhtml_function_coverage=1 00:06:10.412 --rc genhtml_legend=1 00:06:10.412 --rc geninfo_all_blocks=1 00:06:10.412 --rc geninfo_unexecuted_blocks=1 00:06:10.412 00:06:10.412 ' 00:06:10.412 18:15:28 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:10.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.412 --rc genhtml_branch_coverage=1 00:06:10.412 --rc genhtml_function_coverage=1 00:06:10.412 --rc genhtml_legend=1 00:06:10.412 --rc geninfo_all_blocks=1 00:06:10.412 --rc geninfo_unexecuted_blocks=1 00:06:10.412 00:06:10.412 ' 00:06:10.412 18:15:28 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:10.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.412 --rc genhtml_branch_coverage=1 00:06:10.412 --rc genhtml_function_coverage=1 00:06:10.412 --rc genhtml_legend=1 00:06:10.412 --rc geninfo_all_blocks=1 00:06:10.412 --rc geninfo_unexecuted_blocks=1 00:06:10.412 00:06:10.412 ' 00:06:10.412 18:15:28 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:10.412 18:15:28 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:10.412 18:15:28 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:10.412 18:15:28 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:10.412 18:15:28 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:10.412 18:15:28 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:10.412 18:15:28 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:10.412 18:15:28 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:10.412 18:15:28 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:10.412 18:15:28 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:10.412 18:15:28 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:10.412 18:15:28 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:10.412 18:15:28 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:10.412 18:15:28 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:10.412 18:15:28 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:10.412 18:15:28 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:10.412 18:15:28 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:10.412 18:15:28 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:10.412 18:15:28 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:10.412 18:15:28 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:10.412 18:15:28 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:10.412 18:15:28 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:10.412 18:15:28 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:10.412 18:15:28 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:10.412 18:15:28 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59729 00:06:10.412 18:15:28 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:10.412 18:15:28 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59729 00:06:10.412 18:15:28 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 59729 ']' 00:06:10.412 18:15:28 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:10.412 18:15:28 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.412 18:15:28 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.412 18:15:28 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.412 18:15:28 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.412 18:15:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:10.412 [2024-11-20 18:15:28.882346] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
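Note: gen_nvme.sh below emits a load_subsystem_config payload that attaches four PCIe controllers (Nvme0-Nvme3 at 0000:00:10.0 through 0000:00:13.0) in one shot. For reference, the per-controller equivalent as a direct RPC call, assuming the same BDF addresses:

    # Per-controller equivalent of the generated config shown below:
    # attach one PCIe NVMe controller by its BDF address.
    scripts/rpc.py bdev_nvme_attach_controller \
        -b Nvme0 -t PCIe -a 0000:00:10.0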
00:06:10.412 [2024-11-20 18:15:28.882619] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59729 ] 00:06:10.673 [2024-11-20 18:15:29.041862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.673 [2024-11-20 18:15:29.138989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.242 18:15:29 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.242 18:15:29 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:06:11.242 18:15:29 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:11.242 18:15:29 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:11.242 18:15:29 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:11.242 18:15:29 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:11.242 18:15:29 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:11.242 18:15:29 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:11.242 18:15:29 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.242 18:15:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:11.502 18:15:30 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.502 18:15:30 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:11.502 18:15:30 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.502 18:15:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:11.502 18:15:30 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.502 18:15:30 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:11.502 18:15:30 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:11.502 18:15:30 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.502 18:15:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:11.502 18:15:30 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.502 18:15:30 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:11.502 18:15:30 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.502 18:15:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:11.502 18:15:30 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.502 18:15:30 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:11.502 18:15:30 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.502 18:15:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:11.502 18:15:30 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.502 18:15:30 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:11.764 18:15:30 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:11.764 18:15:30 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:11.764 18:15:30 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.764 18:15:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:11.764 18:15:30 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.764 18:15:30 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:11.765 18:15:30 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "ac6e8b27-edbf-4ba7-a71b-3fd7f9d3fd5b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "ac6e8b27-edbf-4ba7-a71b-3fd7f9d3fd5b",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "9113aaa7-a033-4361-ba23-d7f85938a840"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "9113aaa7-a033-4361-ba23-d7f85938a840",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' 
"ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "89a398c7-c718-432f-9ad3-9a5ac7dea183"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "89a398c7-c718-432f-9ad3-9a5ac7dea183",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "02217fc3-18f1-46c9-988e-8a5938515723"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "02217fc3-18f1-46c9-988e-8a5938515723",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "ad816603-0008-4ca3-a025-905d98f7984f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ad816603-0008-4ca3-a025-905d98f7984f",' ' "numa_id": -1,' ' "assigned_rate_limits": 
{' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "25b3d997-580e-4831-bace-5c8c42f4cdea"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "25b3d997-580e-4831-bace-5c8c42f4cdea",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:11.765 18:15:30 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:11.765 18:15:30 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:11.765 18:15:30 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:11.765 18:15:30 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:11.765 18:15:30 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 59729 00:06:11.765 18:15:30 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 59729 ']' 00:06:11.765 18:15:30 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 59729 00:06:11.765 18:15:30 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:06:11.765 18:15:30 blockdev_nvme -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.765 18:15:30 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59729 00:06:11.765 killing process with pid 59729 00:06:11.765 18:15:30 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.765 18:15:30 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.765 18:15:30 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59729' 00:06:11.765 18:15:30 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 59729 00:06:11.765 18:15:30 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 59729 00:06:13.150 18:15:31 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:13.150 18:15:31 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:13.150 18:15:31 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:13.150 18:15:31 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.150 18:15:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:13.150 ************************************ 00:06:13.150 START TEST bdev_hello_world 00:06:13.150 ************************************ 00:06:13.150 18:15:31 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:13.150 [2024-11-20 18:15:31.765226] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:06:13.150 [2024-11-20 18:15:31.765461] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59808 ] 00:06:13.414 [2024-11-20 18:15:31.924599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.415 [2024-11-20 18:15:32.020841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.989 [2024-11-20 18:15:32.557680] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:13.989 [2024-11-20 18:15:32.557843] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:13.989 [2024-11-20 18:15:32.557868] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:13.989 [2024-11-20 18:15:32.560256] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:13.989 [2024-11-20 18:15:32.561018] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:13.989 [2024-11-20 18:15:32.561124] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:13.989 [2024-11-20 18:15:32.561767] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:06:13.989 00:06:13.989 [2024-11-20 18:15:32.561795] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:14.932 00:06:14.932 real 0m1.585s 00:06:14.932 user 0m1.307s 00:06:14.932 sys 0m0.172s 00:06:14.932 18:15:33 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.932 ************************************ 00:06:14.932 END TEST bdev_hello_world 00:06:14.932 ************************************ 00:06:14.932 18:15:33 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:14.932 18:15:33 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:14.932 18:15:33 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:14.932 18:15:33 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.932 18:15:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:14.932 ************************************ 00:06:14.932 START TEST bdev_bounds 00:06:14.932 ************************************ 00:06:14.932 18:15:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:14.932 Process bdevio pid: 59850 00:06:14.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.932 18:15:33 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59850 00:06:14.932 18:15:33 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:14.932 18:15:33 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59850' 00:06:14.932 18:15:33 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59850 00:06:14.932 18:15:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 59850 ']' 00:06:14.932 18:15:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.932 18:15:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.932 18:15:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.932 18:15:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.932 18:15:33 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:14.932 18:15:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:14.932 [2024-11-20 18:15:33.402214] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
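The bdev_hello_world stage that just completed boils down to a single run of the prebuilt example binary against the generated NVMe bdev config. A minimal sketch of reproducing that step by hand, using the paths from this log; note the harness feeds the config through rpc_cmd load_subsystem_config instead, and gen_nvme.sh's --json-with-subsystems flag is an assumption here, not something this log shows:
# Sketch only: manual equivalent of the bdev_hello_world step above.
cd /home/vagrant/spdk_repo/spdk
# Assumption: gen_nvme.sh can emit a complete bdev subsystem config directly.
scripts/gen_nvme.sh --json-with-subsystems > test/bdev/bdev.json
# hello_bdev opens the named bdev, writes "Hello World!", reads it back,
# and prints the string, matching the NOTICE lines seen above.
build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1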
00:06:14.932 [2024-11-20 18:15:33.402330] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59850 ] 00:06:15.194 [2024-11-20 18:15:33.561412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:15.194 [2024-11-20 18:15:33.659555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.194 [2024-11-20 18:15:33.659768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.194 [2024-11-20 18:15:33.659791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.764 18:15:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.765 18:15:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:15.765 18:15:34 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:15.765 I/O targets: 00:06:15.765 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:15.765 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:15.765 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:15.765 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:15.765 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:15.765 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:15.765 00:06:15.765 00:06:15.765 CUnit - A unit testing framework for C - Version 2.1-3 00:06:15.765 http://cunit.sourceforge.net/ 00:06:15.765 00:06:15.765 00:06:15.765 Suite: bdevio tests on: Nvme3n1 00:06:15.765 Test: blockdev write read block ...passed 00:06:15.765 Test: blockdev write zeroes read block ...passed 00:06:15.765 Test: blockdev write zeroes read no split ...passed 00:06:15.765 Test: blockdev write zeroes read split ...passed 00:06:15.765 Test: blockdev write zeroes read split partial ...passed 00:06:15.765 Test: blockdev reset ...[2024-11-20 18:15:34.381898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:15.765 [2024-11-20 18:15:34.385955] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 00:06:15.765 passed 00:06:15.765 Test: blockdev write read 8 blocks ...
00:06:15.765 passed 00:06:15.765 Test: blockdev write read size > 128k ...passed 00:06:15.765 Test: blockdev write read invalid size ...passed 00:06:15.765 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:15.765 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:15.765 Test: blockdev write read max offset ...passed 00:06:16.026 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:16.026 Test: blockdev writev readv 8 blocks ...passed 00:06:16.026 Test: blockdev writev readv 30 x 1block ...passed 00:06:16.026 Test: blockdev writev readv block ...passed 00:06:16.026 Test: blockdev writev readv size > 128k ...passed 00:06:16.026 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:16.026 Test: blockdev comparev and writev ...[2024-11-20 18:15:34.397689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ad80a000 len:0x1000 00:06:16.026 [2024-11-20 18:15:34.397732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:16.026 passed 00:06:16.026 Test: blockdev nvme passthru rw ...passed 00:06:16.026 Test: blockdev nvme passthru vendor specific ...[2024-11-20 18:15:34.399224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:16.026 [2024-11-20 18:15:34.399252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:16.026 passed 00:06:16.026 Test: blockdev nvme admin passthru ...passed 00:06:16.026 Test: blockdev copy ...passed 00:06:16.026 Suite: bdevio tests on: Nvme2n3 00:06:16.026 Test: blockdev write read block ...passed 00:06:16.026 Test: blockdev write zeroes read block ...passed 00:06:16.026 Test: blockdev write zeroes read no split ...passed 00:06:16.026 Test: blockdev write zeroes read split ...passed 00:06:16.026 Test: blockdev write zeroes read split partial ...passed 00:06:16.026 Test: blockdev reset ...[2024-11-20 18:15:34.451523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:16.026 [2024-11-20 18:15:34.458236] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:16.026 passed 00:06:16.026 Test: blockdev write read 8 blocks ...passed 00:06:16.026 Test: blockdev write read size > 128k ...passed 00:06:16.026 Test: blockdev write read invalid size ...passed 00:06:16.026 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:16.026 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:16.026 Test: blockdev write read max offset ...passed 00:06:16.026 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:16.026 Test: blockdev writev readv 8 blocks ...passed 00:06:16.026 Test: blockdev writev readv 30 x 1block ...passed 00:06:16.026 Test: blockdev writev readv block ...passed 00:06:16.026 Test: blockdev writev readv size > 128k ...passed 00:06:16.026 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:16.026 Test: blockdev comparev and writev ...[2024-11-20 18:15:34.476999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x291206000 len:0x1000 00:06:16.026 [2024-11-20 18:15:34.477038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:16.026 passed 00:06:16.026 Test: blockdev nvme passthru rw ...passed 00:06:16.026 Test: blockdev nvme passthru vendor specific ...passed 00:06:16.026 Test: blockdev nvme admin passthru ...[2024-11-20 18:15:34.479152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:16.026 [2024-11-20 18:15:34.479181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:16.026 passed 00:06:16.026 Test: blockdev copy ...passed 00:06:16.026 Suite: bdevio tests on: Nvme2n2 00:06:16.027 Test: blockdev write read block ...passed 00:06:16.027 Test: blockdev write zeroes read block ...passed 00:06:16.027 Test: blockdev write zeroes read no split ...passed 00:06:16.027 Test: blockdev write zeroes read split ...passed 00:06:16.027 Test: blockdev write zeroes read split partial ...passed 00:06:16.027 Test: blockdev reset ...[2024-11-20 18:15:34.533928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:16.027 [2024-11-20 18:15:34.539218] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:06:16.027 passed 00:06:16.027 Test: blockdev write read 8 blocks ...
00:06:16.027 passed 00:06:16.027 Test: blockdev write read size > 128k ...passed 00:06:16.027 Test: blockdev write read invalid size ...passed 00:06:16.027 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:16.027 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:16.027 Test: blockdev write read max offset ...passed 00:06:16.027 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:16.027 Test: blockdev writev readv 8 blocks ...passed 00:06:16.027 Test: blockdev writev readv 30 x 1block ...passed 00:06:16.027 Test: blockdev writev readv block ...passed 00:06:16.027 Test: blockdev writev readv size > 128k ...passed 00:06:16.027 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:16.027 Test: blockdev comparev and writev ...[2024-11-20 18:15:34.558553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ccc3c000 len:0x1000 00:06:16.027 [2024-11-20 18:15:34.558677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:16.027 passed 00:06:16.027 Test: blockdev nvme passthru rw ...passed 00:06:16.027 Test: blockdev nvme passthru vendor specific ...passed 00:06:16.027 Test: blockdev nvme admin passthru ...[2024-11-20 18:15:34.560964] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:16.027 [2024-11-20 18:15:34.560998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:16.027 passed 00:06:16.027 Test: blockdev copy ...passed 00:06:16.027 Suite: bdevio tests on: Nvme2n1 00:06:16.027 Test: blockdev write read block ...passed 00:06:16.027 Test: blockdev write zeroes read block ...passed 00:06:16.027 Test: blockdev write zeroes read no split ...passed 00:06:16.027 Test: blockdev write zeroes read split ...passed 00:06:16.027 Test: blockdev write zeroes read split partial ...passed 00:06:16.027 Test: blockdev reset ...[2024-11-20 18:15:34.614542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:16.027 [2024-11-20 18:15:34.618536] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:06:16.027 passed 00:06:16.027 Test: blockdev write read 8 blocks ...
00:06:16.027 passed 00:06:16.027 Test: blockdev write read size > 128k ...passed 00:06:16.027 Test: blockdev write read invalid size ...passed 00:06:16.027 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:16.027 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:16.027 Test: blockdev write read max offset ...passed 00:06:16.027 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:16.027 Test: blockdev writev readv 8 blocks ...passed 00:06:16.027 Test: blockdev writev readv 30 x 1block ...passed 00:06:16.027 Test: blockdev writev readv block ...passed 00:06:16.027 Test: blockdev writev readv size > 128k ...passed 00:06:16.027 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:16.027 Test: blockdev comparev and writev ...[2024-11-20 18:15:34.636631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ccc38000 len:0x1000 00:06:16.027 [2024-11-20 18:15:34.636763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:16.027 passed 00:06:16.027 Test: blockdev nvme passthru rw ...passed 00:06:16.027 Test: blockdev nvme passthru vendor specific ...passed 00:06:16.027 Test: blockdev nvme admin passthru ...[2024-11-20 18:15:34.639197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:16.027 [2024-11-20 18:15:34.639230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:16.027 passed 00:06:16.027 Test: blockdev copy ...passed 00:06:16.027 Suite: bdevio tests on: Nvme1n1 00:06:16.027 Test: blockdev write read block ...passed 00:06:16.288 Test: blockdev write zeroes read block ...passed 00:06:16.288 Test: blockdev write zeroes read no split ...passed 00:06:16.288 Test: blockdev write zeroes read split ...passed 00:06:16.288 Test: blockdev write zeroes read split partial ...passed 00:06:16.288 Test: blockdev reset ...[2024-11-20 18:15:34.695927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:16.288 [2024-11-20 18:15:34.699531] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:06:16.288 passed 00:06:16.288 Test: blockdev write read 8 blocks ...
00:06:16.288 passed 00:06:16.288 Test: blockdev write read size > 128k ...passed 00:06:16.288 Test: blockdev write read invalid size ...passed 00:06:16.288 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:16.288 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:16.288 Test: blockdev write read max offset ...passed 00:06:16.288 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:16.288 Test: blockdev writev readv 8 blocks ...passed 00:06:16.288 Test: blockdev writev readv 30 x 1block ...passed 00:06:16.288 Test: blockdev writev readv block ...passed 00:06:16.288 Test: blockdev writev readv size > 128k ...passed 00:06:16.288 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:16.288 Test: blockdev comparev and writev ...[2024-11-20 18:15:34.717212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ccc34000 len:0x1000 00:06:16.288 [2024-11-20 18:15:34.717258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:16.288 passed 00:06:16.288 Test: blockdev nvme passthru rw ...passed 00:06:16.288 Test: blockdev nvme passthru vendor specific ...passed 00:06:16.288 Test: blockdev nvme admin passthru ...[2024-11-20 18:15:34.719236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:16.288 [2024-11-20 18:15:34.719269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:16.288 passed 00:06:16.288 Test: blockdev copy ...passed 00:06:16.288 Suite: bdevio tests on: Nvme0n1 00:06:16.288 Test: blockdev write read block ...passed 00:06:16.288 Test: blockdev write zeroes read block ...passed 00:06:16.288 Test: blockdev write zeroes read no split ...passed 00:06:16.288 Test: blockdev write zeroes read split ...passed 00:06:16.288 Test: blockdev write zeroes read split partial ...passed 00:06:16.288 Test: blockdev reset ...[2024-11-20 18:15:34.777699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:16.288 [2024-11-20 18:15:34.783016] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:16.288 passed 00:06:16.288 Test: blockdev write read 8 blocks ...passed 00:06:16.288 Test: blockdev write read size > 128k ...passed 00:06:16.288 Test: blockdev write read invalid size ...passed 00:06:16.288 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:16.288 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:16.288 Test: blockdev write read max offset ...passed 00:06:16.288 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:16.288 Test: blockdev writev readv 8 blocks ...passed 00:06:16.288 Test: blockdev writev readv 30 x 1block ...passed 00:06:16.288 Test: blockdev writev readv block ...passed 00:06:16.288 Test: blockdev writev readv size > 128k ...passed 00:06:16.288 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:16.288 Test: blockdev comparev and writev ...passed 00:06:16.288 Test: blockdev nvme passthru rw ...[2024-11-20 18:15:34.798404] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 
00:06:16.288 passed 00:06:16.288 Test: blockdev nvme passthru vendor specific ...passed 00:06:16.288 Test: blockdev nvme admin passthru ...[2024-11-20 18:15:34.799861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:16.288 [2024-11-20 18:15:34.799903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:16.288 passed 00:06:16.288 Test: blockdev copy ...passed 00:06:16.288 00:06:16.288 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.288 suites 6 6 n/a 0 0 00:06:16.288 tests 138 138 138 0 0 00:06:16.288 asserts 893 893 893 0 n/a 00:06:16.288 00:06:16.288 Elapsed time = 1.192 seconds 00:06:16.288 0 00:06:16.288 18:15:34 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59850 00:06:16.288 18:15:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 59850 ']' 00:06:16.288 18:15:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 59850 00:06:16.288 18:15:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:16.288 18:15:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.288 18:15:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59850 00:06:16.288 18:15:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.288 18:15:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.288 18:15:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59850' 00:06:16.288 killing process with pid 59850 00:06:16.288 18:15:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 59850 00:06:16.288 18:15:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 59850 00:06:17.232 18:15:35 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:17.232 00:06:17.232 real 0m2.174s 00:06:17.232 user 0m5.527s 00:06:17.232 sys 0m0.252s 00:06:17.232 18:15:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.232 18:15:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:17.232 ************************************ 00:06:17.232 END TEST bdev_bounds 00:06:17.232 ************************************ 00:06:17.232 18:15:35 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:17.232 18:15:35 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:17.232 18:15:35 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.232 18:15:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:17.232 ************************************ 00:06:17.232 START TEST bdev_nbd 00:06:17.232 ************************************ 00:06:17.232 18:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:17.232 18:15:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:17.232 18:15:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:17.232 18:15:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.232 18:15:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:17.232 18:15:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:17.232 18:15:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:17.232 18:15:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:17.232 18:15:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:17.233 18:15:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:17.233 18:15:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:17.233 18:15:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:17.233 18:15:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:17.233 18:15:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:17.233 18:15:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:17.233 18:15:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:17.233 18:15:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=59904 00:06:17.233 18:15:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:17.233 18:15:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 59904 /var/tmp/spdk-nbd.sock 00:06:17.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:17.233 18:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 59904 ']' 00:06:17.233 18:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:17.233 18:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.233 18:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:17.233 18:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.233 18:15:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:17.233 18:15:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:17.233 [2024-11-20 18:15:35.647935] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
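The bdev_nbd stage starting here exports each bdev as a kernel NBD device through the dedicated RPC socket and probes it with one direct read. A minimal sketch of a single such round trip, assuming an SPDK app (bdev_svc in this run) is already listening on /var/tmp/spdk-nbd.sock with an Nvme0n1 bdev attached:
# Sketch only: one attach/probe/detach cycle as exercised below.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
$rpc -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0            # pair the bdev with /dev/nbd0
grep -q -w nbd0 /proc/partitions                            # device-ready check, as waitfornbd does
dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct   # single 4 KiB direct read
$rpc -s "$sock" nbd_stop_disk /dev/nbd0                     # detach the NBD device
$rpc -s "$sock" nbd_get_disks                               # should now report an empty list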
00:06:17.233 [2024-11-20 18:15:35.648214] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:17.233 [2024-11-20 18:15:35.812772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.494 [2024-11-20 18:15:35.920380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.066 18:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.066 18:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:18.066 18:15:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:18.066 18:15:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.066 18:15:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:18.066 18:15:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:18.066 18:15:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:18.066 18:15:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.066 18:15:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:18.066 18:15:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:18.066 18:15:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:18.066 18:15:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:18.066 18:15:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:18.066 18:15:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:18.066 18:15:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:18.327 18:15:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:18.327 18:15:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:18.327 18:15:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:18.327 18:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:18.327 18:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.327 18:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.327 18:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.327 18:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:18.327 18:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:18.327 18:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.327 18:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.327 18:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.327 1+0 records in 
00:06:18.327 1+0 records out 00:06:18.327 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000757006 s, 5.4 MB/s 00:06:18.327 18:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.327 18:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:18.327 18:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.327 18:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.327 18:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:18.327 18:15:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:18.328 18:15:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:18.328 18:15:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:18.589 18:15:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:18.589 18:15:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:18.589 18:15:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:18.589 18:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:18.589 18:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.589 18:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.589 18:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.589 18:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:18.589 18:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:18.589 18:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.589 18:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.589 18:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.589 1+0 records in 00:06:18.589 1+0 records out 00:06:18.589 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000744822 s, 5.5 MB/s 00:06:18.589 18:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.589 18:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:18.589 18:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.589 18:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.589 18:15:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:18.589 18:15:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:18.589 18:15:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:18.589 18:15:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:18.589 18:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:18.589 18:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:18.589 18:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:06:18.589 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:18.589 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.589 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.589 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.589 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:18.589 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:18.589 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.589 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.589 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.589 1+0 records in 00:06:18.589 1+0 records out 00:06:18.589 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00084556 s, 4.8 MB/s 00:06:18.589 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.589 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:18.589 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.589 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.589 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:18.589 18:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:18.589 18:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:18.848 18:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:18.848 18:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:18.848 18:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:18.848 18:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:18.848 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:18.848 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.849 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.849 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.849 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:18.849 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:18.849 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.849 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.849 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.849 1+0 records in 00:06:18.849 1+0 records out 00:06:18.849 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348669 s, 11.7 MB/s 00:06:18.849 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.849 18:15:37 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:18.849 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.849 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.849 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:18.849 18:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:18.849 18:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:18.849 18:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:19.108 18:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:19.108 18:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:19.108 18:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:19.108 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:19.108 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:19.108 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:19.108 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:19.108 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:19.108 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:19.108 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:19.108 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:19.108 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:19.108 1+0 records in 00:06:19.108 1+0 records out 00:06:19.108 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411435 s, 10.0 MB/s 00:06:19.108 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.108 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:19.108 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.108 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:19.108 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:19.108 18:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:19.108 18:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:19.108 18:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:19.366 18:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:19.366 18:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:19.366 18:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:19.366 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:19.366 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:19.366 18:15:37 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:19.366 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:19.366 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:19.366 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:19.366 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:19.366 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:19.366 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:19.366 1+0 records in 00:06:19.366 1+0 records out 00:06:19.366 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000562713 s, 7.3 MB/s 00:06:19.366 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.366 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:19.366 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.366 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:19.366 18:15:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:19.366 18:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:19.366 18:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:19.366 18:15:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.625 18:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:19.625 { 00:06:19.625 "nbd_device": "/dev/nbd0", 00:06:19.625 "bdev_name": "Nvme0n1" 00:06:19.625 }, 00:06:19.625 { 00:06:19.625 "nbd_device": "/dev/nbd1", 00:06:19.625 "bdev_name": "Nvme1n1" 00:06:19.625 }, 00:06:19.626 { 00:06:19.626 "nbd_device": "/dev/nbd2", 00:06:19.626 "bdev_name": "Nvme2n1" 00:06:19.626 }, 00:06:19.626 { 00:06:19.626 "nbd_device": "/dev/nbd3", 00:06:19.626 "bdev_name": "Nvme2n2" 00:06:19.626 }, 00:06:19.626 { 00:06:19.626 "nbd_device": "/dev/nbd4", 00:06:19.626 "bdev_name": "Nvme2n3" 00:06:19.626 }, 00:06:19.626 { 00:06:19.626 "nbd_device": "/dev/nbd5", 00:06:19.626 "bdev_name": "Nvme3n1" 00:06:19.626 } 00:06:19.626 ]' 00:06:19.626 18:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:19.626 18:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:19.626 { 00:06:19.626 "nbd_device": "/dev/nbd0", 00:06:19.626 "bdev_name": "Nvme0n1" 00:06:19.626 }, 00:06:19.626 { 00:06:19.626 "nbd_device": "/dev/nbd1", 00:06:19.626 "bdev_name": "Nvme1n1" 00:06:19.626 }, 00:06:19.626 { 00:06:19.626 "nbd_device": "/dev/nbd2", 00:06:19.626 "bdev_name": "Nvme2n1" 00:06:19.626 }, 00:06:19.626 { 00:06:19.626 "nbd_device": "/dev/nbd3", 00:06:19.626 "bdev_name": "Nvme2n2" 00:06:19.626 }, 00:06:19.626 { 00:06:19.626 "nbd_device": "/dev/nbd4", 00:06:19.626 "bdev_name": "Nvme2n3" 00:06:19.626 }, 00:06:19.626 { 00:06:19.626 "nbd_device": "/dev/nbd5", 00:06:19.626 "bdev_name": "Nvme3n1" 00:06:19.626 } 00:06:19.626 ]' 00:06:19.626 18:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:19.626 18:15:38 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:19.626 18:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.626 18:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:19.626 18:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:19.626 18:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:19.626 18:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.626 18:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:19.884 18:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:19.884 18:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:19.884 18:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:19.884 18:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.884 18:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.884 18:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:19.884 18:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:19.884 18:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.884 18:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.884 18:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:20.142 18:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:20.142 18:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:20.142 18:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:20.142 18:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.142 18:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.142 18:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:20.142 18:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.142 18:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.142 18:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.142 18:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:20.400 18:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:20.400 18:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:20.400 18:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:20.400 18:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.400 18:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.400 18:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:20.400 18:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.400 18:15:38 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:20.400 18:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.400 18:15:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:20.400 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:20.400 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:20.400 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:20.400 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.400 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.400 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:20.400 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.400 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.400 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.400 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:20.658 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:20.658 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:20.658 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:20.658 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.658 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.658 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:20.658 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.658 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.658 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.658 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:20.919 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:20.919 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:20.919 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:20.919 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.919 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.919 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:20.919 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.919 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.919 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.919 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.919 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.178 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:21.179 18:15:39 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:21.179 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.179 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:21.179 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.179 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:21.179 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:21.179 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:21.179 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:21.179 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:21.179 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:21.179 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:21.179 18:15:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:21.179 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.179 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:21.179 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:21.179 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:21.179 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:21.179 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:21.179 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.179 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:21.179 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:21.179 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:21.179 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:21.179 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:21.179 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:21.179 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:21.179 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:21.437 /dev/nbd0 00:06:21.437 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:21.437 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:21.437 18:15:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:21.437 18:15:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:21.437 18:15:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:21.437 
18:15:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:21.437 18:15:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:21.437 18:15:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:21.437 18:15:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:21.437 18:15:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:21.437 18:15:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:21.437 1+0 records in 00:06:21.437 1+0 records out 00:06:21.437 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000461681 s, 8.9 MB/s 00:06:21.437 18:15:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:21.437 18:15:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:21.437 18:15:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:21.437 18:15:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:21.437 18:15:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:21.437 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.437 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:21.437 18:15:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:21.696 /dev/nbd1 00:06:21.696 18:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:21.696 18:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:21.696 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:21.696 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:21.696 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:21.696 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:21.696 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:21.696 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:21.696 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:21.696 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:21.696 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:21.696 1+0 records in 00:06:21.696 1+0 records out 00:06:21.696 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458764 s, 8.9 MB/s 00:06:21.696 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:21.696 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:21.696 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:21.696 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:21.696 18:15:40 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:06:21.696 18:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.696 18:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:21.696 18:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:21.954 /dev/nbd10 00:06:21.954 18:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:21.954 18:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:21.954 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:21.954 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:21.954 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:21.954 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:21.954 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:21.954 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:21.954 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:21.954 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:21.954 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:21.954 1+0 records in 00:06:21.954 1+0 records out 00:06:21.954 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048726 s, 8.4 MB/s 00:06:21.954 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:21.954 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:21.954 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:21.954 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:21.954 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:21.955 18:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.955 18:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:21.955 18:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:22.215 /dev/nbd11 00:06:22.216 18:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:22.216 18:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:22.216 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:22.216 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:22.216 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:22.216 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:22.216 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:22.216 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:22.216 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:22.216 18:15:40 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:22.216 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:22.216 1+0 records in 00:06:22.216 1+0 records out 00:06:22.216 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413871 s, 9.9 MB/s 00:06:22.216 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.216 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:22.216 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.216 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:22.216 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:22.216 18:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.216 18:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:22.216 18:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:22.216 /dev/nbd12 00:06:22.216 18:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:22.216 18:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:22.216 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:22.216 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:22.216 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:22.216 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:22.216 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:22.475 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:22.475 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:22.475 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:22.475 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:22.475 1+0 records in 00:06:22.475 1+0 records out 00:06:22.475 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00119635 s, 3.4 MB/s 00:06:22.475 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.475 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:22.475 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.475 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:22.475 18:15:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:22.475 18:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.475 18:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:22.475 18:15:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:22.475 /dev/nbd13 
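The grep/dd sequence traced above for each device (and repeated once more below for nbd13) is the waitfornbd helper from autotest_common.sh: it polls /proc/partitions until the kernel registers the new nbd node, then proves the device answers I/O by pulling a single 4 KiB block off it with O_DIRECT. A condensed sketch reconstructed from this trace; the per-iteration sleep is an assumption, since the delay between polls is not visible in the xtrace output:

    waitfornbd() {
        local nbd_name=$1 i size
        local tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        # Poll up to 20 times until the kernel lists the device in /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off between polls; not shown in the trace
        done
        # Read one 4 KiB block with O_DIRECT to confirm the device is live.
        dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ]   # succeed only if the read actually produced data
    }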
00:06:22.475 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:22.475 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:22.475 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:22.475 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:22.475 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:22.475 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:22.475 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:22.475 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:22.475 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:22.475 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:22.476 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:22.476 1+0 records in 00:06:22.476 1+0 records out 00:06:22.476 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0005422 s, 7.6 MB/s 00:06:22.476 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.476 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:22.476 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.476 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:22.476 18:15:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:22.476 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.476 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:22.476 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.476 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.476 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.734 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:22.734 { 00:06:22.734 "nbd_device": "/dev/nbd0", 00:06:22.734 "bdev_name": "Nvme0n1" 00:06:22.734 }, 00:06:22.734 { 00:06:22.734 "nbd_device": "/dev/nbd1", 00:06:22.734 "bdev_name": "Nvme1n1" 00:06:22.734 }, 00:06:22.734 { 00:06:22.734 "nbd_device": "/dev/nbd10", 00:06:22.734 "bdev_name": "Nvme2n1" 00:06:22.734 }, 00:06:22.734 { 00:06:22.734 "nbd_device": "/dev/nbd11", 00:06:22.734 "bdev_name": "Nvme2n2" 00:06:22.734 }, 00:06:22.734 { 00:06:22.734 "nbd_device": "/dev/nbd12", 00:06:22.734 "bdev_name": "Nvme2n3" 00:06:22.734 }, 00:06:22.734 { 00:06:22.734 "nbd_device": "/dev/nbd13", 00:06:22.734 "bdev_name": "Nvme3n1" 00:06:22.734 } 00:06:22.734 ]' 00:06:22.734 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:22.734 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:22.734 { 00:06:22.734 "nbd_device": "/dev/nbd0", 00:06:22.734 "bdev_name": "Nvme0n1" 00:06:22.734 }, 00:06:22.734 { 00:06:22.734 "nbd_device": "/dev/nbd1", 00:06:22.734 "bdev_name": "Nvme1n1" 00:06:22.734 }, 
00:06:22.734 { 00:06:22.734 "nbd_device": "/dev/nbd10", 00:06:22.734 "bdev_name": "Nvme2n1" 00:06:22.734 }, 00:06:22.734 { 00:06:22.734 "nbd_device": "/dev/nbd11", 00:06:22.734 "bdev_name": "Nvme2n2" 00:06:22.734 }, 00:06:22.734 { 00:06:22.734 "nbd_device": "/dev/nbd12", 00:06:22.734 "bdev_name": "Nvme2n3" 00:06:22.734 }, 00:06:22.734 { 00:06:22.734 "nbd_device": "/dev/nbd13", 00:06:22.734 "bdev_name": "Nvme3n1" 00:06:22.734 } 00:06:22.734 ]' 00:06:22.734 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:22.734 /dev/nbd1 00:06:22.734 /dev/nbd10 00:06:22.734 /dev/nbd11 00:06:22.734 /dev/nbd12 00:06:22.734 /dev/nbd13' 00:06:22.734 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:22.734 /dev/nbd1 00:06:22.734 /dev/nbd10 00:06:22.734 /dev/nbd11 00:06:22.734 /dev/nbd12 00:06:22.734 /dev/nbd13' 00:06:22.734 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.734 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:22.734 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:22.734 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:22.734 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:22.734 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:22.734 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:22.734 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:22.734 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:22.734 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:22.734 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:22.734 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:22.734 256+0 records in 00:06:22.734 256+0 records out 00:06:22.734 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0089614 s, 117 MB/s 00:06:22.734 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.734 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:23.032 256+0 records in 00:06:23.032 256+0 records out 00:06:23.032 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0627132 s, 16.7 MB/s 00:06:23.032 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.032 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:23.032 256+0 records in 00:06:23.032 256+0 records out 00:06:23.032 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0689641 s, 15.2 MB/s 00:06:23.032 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.032 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:23.032 256+0 records in 00:06:23.032 256+0 records out 00:06:23.032 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.0664842 s, 15.8 MB/s 00:06:23.032 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.032 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:23.032 256+0 records in 00:06:23.032 256+0 records out 00:06:23.032 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0659365 s, 15.9 MB/s 00:06:23.032 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.032 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:23.294 256+0 records in 00:06:23.294 256+0 records out 00:06:23.294 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.063827 s, 16.4 MB/s 00:06:23.294 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.294 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:23.294 256+0 records in 00:06:23.294 256+0 records out 00:06:23.294 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0644894 s, 16.3 MB/s 00:06:23.294 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:23.294 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:23.294 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.294 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:23.294 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:23.294 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:23.294 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:23.294 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.294 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:23.294 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.294 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:23.294 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.294 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:23.294 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.294 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:23.294 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.294 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:23.294 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.294 18:15:41 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:23.294 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:23.294 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:23.294 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.294 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:23.294 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:23.294 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:23.294 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.294 18:15:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:23.552 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:23.552 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:23.552 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:23.552 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.552 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.552 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:23.552 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:23.552 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.552 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.552 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:23.811 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:23.811 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:23.811 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:23.811 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.811 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.811 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:23.811 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:23.811 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.811 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.811 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:24.070 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:24.070 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:24.070 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:24.070 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.070 18:15:42 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.070 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:24.070 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:24.070 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.070 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.070 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:24.070 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:24.070 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:24.070 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:24.070 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.070 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.070 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:24.070 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:24.070 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.070 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.070 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:24.329 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:24.329 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:24.329 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:24.329 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.329 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.329 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:24.329 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:24.329 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.329 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.329 18:15:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:24.587 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:24.587 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:24.587 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:24.587 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.587 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.587 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:24.587 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:24.587 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.587 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:24.587 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.587 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:24.845 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:24.845 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:24.845 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:24.845 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:24.845 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:24.845 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:24.845 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:24.845 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:24.845 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:24.845 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:24.845 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:24.845 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:24.845 18:15:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:24.845 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.845 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:24.845 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:25.104 malloc_lvol_verify 00:06:25.104 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:25.104 c3495f30-c284-4345-9e4e-07da338e9d9d 00:06:25.362 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:25.362 93cf6f19-c8ff-4d02-86ad-3d993cafec56 00:06:25.362 18:15:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:25.620 /dev/nbd0 00:06:25.620 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:25.620 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:25.620 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:25.620 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:25.620 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:25.620 mke2fs 1.47.0 (5-Feb-2023) 00:06:25.620 Discarding device blocks: 0/4096 done 00:06:25.620 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:25.620 00:06:25.620 Allocating group tables: 0/1 done 00:06:25.620 Writing inode tables: 0/1 done 00:06:25.620 Creating journal (1024 blocks): done 00:06:25.620 Writing superblocks and filesystem accounting information: 0/1 done 00:06:25.620 00:06:25.620 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:25.620 18:15:44 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.620 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:25.620 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:25.620 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:25.620 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.620 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:25.877 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:25.877 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:25.877 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:25.877 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.877 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.877 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:25.877 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:25.877 18:15:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.877 18:15:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 59904 00:06:25.877 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 59904 ']' 00:06:25.877 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 59904 00:06:25.878 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:25.878 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.878 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59904 00:06:25.878 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.878 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:25.878 killing process with pid 59904 00:06:25.878 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59904' 00:06:25.878 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 59904 00:06:25.878 18:15:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 59904 00:06:26.447 18:15:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:26.447 00:06:26.447 real 0m9.436s 00:06:26.447 user 0m13.558s 00:06:26.447 sys 0m3.031s 00:06:26.447 18:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.447 ************************************ 00:06:26.447 END TEST bdev_nbd 00:06:26.447 ************************************ 00:06:26.447 18:15:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:26.447 18:15:45 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:26.447 18:15:45 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:06:26.447 skipping fio tests on NVMe due to multi-ns failures. 00:06:26.447 18:15:45 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
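The nbd_with_lvol_verify step that closed out the bdev_nbd test above stacks a logical volume on a malloc bdev and proves end-to-end I/O by formatting it with ext4. The RPC sequence, condensed from the trace (socket path and sizes exactly as logged; the capacity check stands in for nbd_common.sh's wait_for_nbd_set_capacity loop):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MB malloc bdev, 512-byte blocks
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of the malloc bdev
    $rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MB logical volume inside the store
    $rpc nbd_start_disk lvs/lvol /dev/nbd0                 # expose the lvol as /dev/nbd0
    (( $(cat /sys/block/nbd0/size) != 0 ))                 # capacity must be set before use
    mkfs.ext4 /dev/nbd0                                    # a clean format proves read/write I/O
    $rpc nbd_stop_disk /dev/nbd0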
00:06:26.447 18:15:45 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT
00:06:26.447 18:15:45 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:06:26.447 18:15:45 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:06:26.447 18:15:45 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:26.447 18:15:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:26.447 ************************************
00:06:26.447 START TEST bdev_verify
00:06:26.447 ************************************
00:06:26.447 18:15:45 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:06:26.706 [2024-11-20 18:15:45.134773] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization...
00:06:26.706 [2024-11-20 18:15:45.134888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60274 ]
00:06:26.706 [2024-11-20 18:15:45.289667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:26.964 [2024-11-20 18:15:45.367156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:26.964 [2024-11-20 18:15:45.367346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:27.538 Running I/O for 5 seconds...
00:06:29.862 22016.00 IOPS, 86.00 MiB/s [2024-11-20T18:15:49.432Z] 24448.00 IOPS, 95.50 MiB/s [2024-11-20T18:15:50.375Z] 23744.00 IOPS, 92.75 MiB/s [2024-11-20T18:15:51.340Z] 22912.00 IOPS, 89.50 MiB/s [2024-11-20T18:15:51.340Z] 22233.60 IOPS, 86.85 MiB/s
00:06:32.711 Latency(us)
00:06:32.711 [2024-11-20T18:15:51.341Z] Device Information : runtime(s)    IOPS   MiB/s   Fail/s   TO/s   Average       min       max
00:06:32.711 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:32.711   Verification LBA range: start 0x0 length 0xbd0bd
00:06:32.711   Nvme0n1 : 5.05 1823.55 7.12 0.00 0.00 69869.90 12250.19 85499.27
00:06:32.711 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:32.711   Verification LBA range: start 0xbd0bd length 0xbd0bd
00:06:32.711   Nvme0n1 : 5.04 1828.02 7.14 0.00 0.00 69740.58 12603.08 92355.35
00:06:32.711 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:32.711   Verification LBA range: start 0x0 length 0xa0000
00:06:32.711   Nvme1n1 : 5.06 1823.05 7.12 0.00 0.00 69734.88 15022.87 75416.81
00:06:32.711 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:32.711   Verification LBA range: start 0xa0000 length 0xa0000
00:06:32.711   Nvme1n1 : 5.07 1830.34 7.15 0.00 0.00 69470.27 7208.96 83482.78
00:06:32.711 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:32.711   Verification LBA range: start 0x0 length 0x80000
00:06:32.711   Nvme2n1 : 5.07 1828.63 7.14 0.00 0.00 69416.78 5671.38 65334.35
00:06:32.711 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:32.711   Verification LBA range: start 0x80000 length 0x80000
00:06:32.711   Nvme2n1 : 5.08 1838.79 7.18 0.00 0.00 69101.42 8620.50 70980.53
00:06:32.711 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:32.711   Verification LBA range: start 0x0 length 0x80000
00:06:32.711   Nvme2n2 : 5.09 1837.43 7.18 0.00 0.00 68974.78 7763.50 64527.75
00:06:32.711 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:32.711   Verification LBA range: start 0x80000 length 0x80000
00:06:32.711   Nvme2n2 : 5.08 1838.31 7.18 0.00 0.00 68924.95 8973.39 64527.75
00:06:32.712 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:32.712   Verification LBA range: start 0x0 length 0x80000
00:06:32.712   Nvme2n3 : 5.09 1836.46 7.17 0.00 0.00 68877.64 9527.93 66947.54
00:06:32.712 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:32.712   Verification LBA range: start 0x80000 length 0x80000
00:06:32.712   Nvme2n3 : 5.08 1837.79 7.18 0.00 0.00 68752.07 9225.45 67350.84
00:06:32.712 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:32.712   Verification LBA range: start 0x0 length 0x20000
00:06:32.712   Nvme3n1 : 5.09 1835.99 7.17 0.00 0.00 68741.17 7813.91 70173.93
00:06:32.712 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:32.712   Verification LBA range: start 0x20000 length 0x20000
00:06:32.712   Nvme3n1 : 5.09 1837.29 7.18 0.00 0.00 68616.15 9628.75 69770.63
00:06:32.712 [2024-11-20T18:15:51.341Z] ===================================================================================================================
00:06:32.712 [2024-11-20T18:15:51.341Z] Total : 21995.66 85.92 0.00 0.00 69182.70 5671.38 92355.35
00:06:33.656
00:06:33.656 real	0m7.021s
00:06:33.656 user	0m13.168s
00:06:33.656 sys	0m0.197s
00:06:33.656 18:15:52 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:33.656 18:15:52 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:06:33.656 ************************************
00:06:33.656 END TEST bdev_verify
00:06:33.656 ************************************
00:06:33.656 18:15:52 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:06:33.656 18:15:52 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:06:33.656 18:15:52 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:33.656 18:15:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:33.656 ************************************
00:06:33.656 START TEST bdev_verify_big_io
00:06:33.656 ************************************
00:06:33.656 18:15:52 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:06:33.656 [2024-11-20 18:15:52.213654] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization...
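The bdev_verify run that just finished and the bdev_verify_big_io run starting here use the same bdevperf invocation, differing only in I/O size (-o 4096 vs -o 65536). A sketch of the command as it appears in the trace; the reading of -C as "every enabled core submits to every bdev" is inferred from the paired Core Mask 0x1/0x2 job rows each device gets in the latency tables:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    # -q 128    : 128 outstanding I/Os per job
    # -o 4096   : 4 KiB per I/O (the big_io variant uses -o 65536)
    # -w verify : write a pattern, read it back, and compare
    # -t 5      : run for 5 seconds
    # -m 0x3    : run reactors on cores 0 and 1
    # -C        : each enabled core drives every bdev (inferred from the table)
    "$bdevperf" --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3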
00:06:33.656 [2024-11-20 18:15:52.213769] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60372 ]
00:06:33.918 [2024-11-20 18:15:52.374000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:33.918 [2024-11-20 18:15:52.472055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:33.918 [2024-11-20 18:15:52.472062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:34.861 Running I/O for 5 seconds...
00:06:37.975 670.00 IOPS, 41.88 MiB/s [2024-11-20T18:15:57.978Z] 1193.50 IOPS, 74.59 MiB/s [2024-11-20T18:15:58.911Z] 1333.67 IOPS, 83.35 MiB/s [2024-11-20T18:15:59.478Z] 1341.25 IOPS, 83.83 MiB/s [2024-11-20T18:15:59.478Z] 1574.20 IOPS, 98.39 MiB/s
00:06:40.849 Latency(us)
00:06:40.849 [2024-11-20T18:15:59.478Z] Device Information : runtime(s)    IOPS   MiB/s   Fail/s   TO/s   Average       min       max
00:06:40.849 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:40.849   Verification LBA range: start 0x0 length 0xbd0b
00:06:40.849   Nvme0n1 : 5.66 111.21 6.95 0.00 0.00 1100458.52 18551.73 1232480.10
00:06:40.849 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:40.849   Verification LBA range: start 0xbd0b length 0xbd0b
00:06:40.849   Nvme0n1 : 6.09 163.88 10.24 0.00 0.00 636986.20 1027.15 1122782.92
00:06:40.849 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:40.849   Verification LBA range: start 0x0 length 0xa000
00:06:40.849   Nvme1n1 : 5.66 113.09 7.07 0.00 0.00 1047267.80 107277.39 1032444.06
00:06:40.849 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:40.849   Verification LBA range: start 0xa000 length 0xa000
00:06:40.849   Nvme1n1 : 5.50 116.44 7.28 0.00 0.00 1057247.23 25811.10 1219574.55
00:06:40.849 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:40.849   Verification LBA range: start 0x0 length 0x8000
00:06:40.849   Nvme2n1 : 5.89 119.54 7.47 0.00 0.00 960660.37 75013.51 1006632.96
00:06:40.849 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:40.849   Verification LBA range: start 0x8000 length 0x8000
00:06:40.849   Nvme2n1 : 5.66 117.12 7.32 0.00 0.00 1004351.25 115343.36 1019538.51
00:06:40.849 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:40.849   Verification LBA range: start 0x0 length 0x8000
00:06:40.849   Nvme2n2 : 5.99 124.25 7.77 0.00 0.00 892551.53 53235.40 1025991.29
00:06:40.849 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:40.849   Verification LBA range: start 0x8000 length 0x8000
00:06:40.849   Nvme2n2 : 5.89 125.93 7.87 0.00 0.00 908271.62 62914.56 1032444.06
00:06:40.850 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:40.850   Verification LBA range: start 0x0 length 0x8000
00:06:40.850   Nvme2n3 : 6.05 131.49 8.22 0.00 0.00 816358.27 55655.19 1051802.39
00:06:40.850 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:40.850   Verification LBA range: start 0x8000 length 0x8000
00:06:40.850   Nvme2n3 : 5.89 130.30 8.14 0.00 0.00 854385.95 79449.80 1058255.16
00:06:40.850 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:40.850   Verification LBA range: start 0x0 length 0x2000
00:06:40.850   Nvme3n1 : 6.09 147.10 9.19 0.00 0.00 707532.87 409.60 1077613.49
00:06:40.850 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:40.850   Verification LBA range: start 0x2000 length 0x2000
00:06:40.850   Nvme3n1 : 5.95 139.80 8.74 0.00 0.00 771435.34 11040.30 1084066.26
00:06:40.850 [2024-11-20T18:15:59.479Z] ===================================================================================================================
00:06:40.850 [2024-11-20T18:15:59.479Z] Total : 1540.15 96.26 0.00 0.00 876801.77 409.60 1232480.10
00:06:42.755
00:06:42.755 real	0m9.145s
00:06:42.755 user	0m17.372s
00:06:42.755 sys	0m0.227s
00:06:42.755 18:16:01 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:42.755 ************************************
00:06:42.755 END TEST bdev_verify_big_io
00:06:42.755 ************************************
00:06:42.755 18:16:01 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:06:42.755 18:16:01 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:42.755 18:16:01 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:06:42.755 18:16:01 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:42.755 18:16:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:42.755 ************************************
00:06:42.755 START TEST bdev_write_zeroes
00:06:42.755 ************************************
00:06:42.755 18:16:01 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:43.128 [2024-11-20 18:16:01.420742] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization...
00:06:43.128 [2024-11-20 18:16:01.420858] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60481 ]
00:06:43.387 [2024-11-20 18:16:01.572805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:43.644 [2024-11-20 18:16:01.670510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:43.644 Running I/O for 1 seconds...
00:06:45.064 69504.00 IOPS, 271.50 MiB/s
00:06:45.064 Latency(us)
00:06:45.064 [2024-11-20T18:16:03.693Z] Device Information : runtime(s)    IOPS   MiB/s   Fail/s   TO/s   Average       min       max
00:06:45.065 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:45.065   Nvme0n1 : 1.02 11541.32 45.08 0.00 0.00 11068.46 6755.25 23794.61
00:06:45.065 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:45.065   Nvme1n1 : 1.02 11527.91 45.03 0.00 0.00 11067.69 8519.68 24298.73
00:06:45.065 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:45.065   Nvme2n1 : 1.02 11514.84 44.98 0.00 0.00 11056.02 8469.27 24097.08
00:06:45.065 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:45.065   Nvme2n2 : 1.02 11501.80 44.93 0.00 0.00 11050.65 8469.27 22988.01
00:06:45.065 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:45.065   Nvme2n3 : 1.02 11488.78 44.88 0.00 0.00 11039.71 8116.38 21374.82
00:06:45.065 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:45.065   Nvme3n1 : 1.03 11475.86 44.83 0.00 0.00 11027.36 7057.72 21979.77
00:06:45.065 [2024-11-20T18:16:03.694Z] ===================================================================================================================
00:06:45.065 [2024-11-20T18:16:03.694Z] Total : 69050.51 269.73 0.00 0.00 11051.65 6755.25 24298.73
00:06:45.655
00:06:45.655 real	0m2.640s
00:06:45.655 user	0m2.342s
00:06:45.655 sys	0m0.184s
00:06:45.655 18:16:03 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:45.655 ************************************
00:06:45.655 END TEST bdev_write_zeroes
00:06:45.655 ************************************
00:06:45.655 18:16:03 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:06:45.655 18:16:04 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:45.655 18:16:04 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:06:45.655 18:16:04 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:45.655 18:16:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:45.655 ************************************
00:06:45.655 START TEST bdev_json_nonenclosed
00:06:45.655 ************************************
00:06:45.655 18:16:04 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:45.655 [2024-11-20 18:16:04.116547] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization...
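The bdev_json_nonenclosed test starting here is a negative test: bdevperf is handed a config whose top level is not a JSON object, and the expected result is the clean "not enclosed in {}" rejection logged below, ending in spdk_app_stop rather than a crash. The fixture's exact contents are not shown in this log; an illustrative file that would trip the same check might look like:

    cat > nonenclosed.json <<'EOF'
    "subsystems": [
      { "subsystem": "bdev", "config": [] }
    ]
    EOF
    # Feeding this to bdevperf must fail with:
    #   "Invalid JSON configuration: not enclosed in {}."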
00:06:45.655 [2024-11-20 18:16:04.116662] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60534 ] 00:06:45.655 [2024-11-20 18:16:04.277540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.916 [2024-11-20 18:16:04.378089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.916 [2024-11-20 18:16:04.378185] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:45.916 [2024-11-20 18:16:04.378203] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:45.916 [2024-11-20 18:16:04.378211] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:46.178 00:06:46.178 real 0m0.498s 00:06:46.178 user 0m0.311s 00:06:46.178 sys 0m0.083s 00:06:46.178 18:16:04 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.178 18:16:04 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:46.178 ************************************ 00:06:46.178 END TEST bdev_json_nonenclosed 00:06:46.178 ************************************ 00:06:46.178 18:16:04 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:46.178 18:16:04 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:46.178 18:16:04 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.178 18:16:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:46.178 ************************************ 00:06:46.178 START TEST bdev_json_nonarray 00:06:46.178 ************************************ 00:06:46.178 18:16:04 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:46.178 [2024-11-20 18:16:04.675394] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:06:46.178 [2024-11-20 18:16:04.675504] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60554 ] 00:06:46.439 [2024-11-20 18:16:04.835406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.439 [2024-11-20 18:16:04.935903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.439 [2024-11-20 18:16:04.935983] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
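bdev_json_nonarray is the companion negative test: the top level is a valid object, but "subsystems" is not an array, producing the rejection just logged. Again the fixture itself is not shown; a hypothetical config exercising the same path:

    cat > nonarray.json <<'EOF'
    { "subsystems": {} }
    EOF
    # Expected: "Invalid JSON configuration: 'subsystems' should be an array."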
00:06:46.439 [2024-11-20 18:16:04.936000] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:46.439 [2024-11-20 18:16:04.936010] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:46.700 00:06:46.700 real 0m0.493s 00:06:46.700 user 0m0.295s 00:06:46.700 sys 0m0.094s 00:06:46.700 ************************************ 00:06:46.700 END TEST bdev_json_nonarray 00:06:46.700 ************************************ 00:06:46.700 18:16:05 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.700 18:16:05 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:46.700 18:16:05 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:06:46.700 18:16:05 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:06:46.700 18:16:05 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:06:46.700 18:16:05 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:06:46.700 18:16:05 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:06:46.700 18:16:05 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:46.700 18:16:05 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:46.700 18:16:05 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:46.700 18:16:05 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:46.700 18:16:05 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:46.700 18:16:05 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:46.700 00:06:46.700 real 0m36.496s 00:06:46.700 user 0m57.019s 00:06:46.700 sys 0m4.938s 00:06:46.700 ************************************ 00:06:46.700 END TEST blockdev_nvme 00:06:46.700 ************************************ 00:06:46.700 18:16:05 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.700 18:16:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:46.700 18:16:05 -- spdk/autotest.sh@209 -- # uname -s 00:06:46.700 18:16:05 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:46.700 18:16:05 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:46.700 18:16:05 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:46.700 18:16:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.700 18:16:05 -- common/autotest_common.sh@10 -- # set +x 00:06:46.700 ************************************ 00:06:46.700 START TEST blockdev_nvme_gpt 00:06:46.700 ************************************ 00:06:46.700 18:16:05 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:46.700 * Looking for test storage... 
00:06:46.700 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:46.700 18:16:05 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:46.700 18:16:05 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:06:46.700 18:16:05 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:46.961 18:16:05 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:46.961 18:16:05 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.961 18:16:05 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.961 18:16:05 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.962 18:16:05 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.962 18:16:05 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.962 18:16:05 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.962 18:16:05 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.962 18:16:05 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.962 18:16:05 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.962 18:16:05 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.962 18:16:05 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.962 18:16:05 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:46.962 18:16:05 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:46.962 18:16:05 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.962 18:16:05 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:46.962 18:16:05 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:46.962 18:16:05 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:46.962 18:16:05 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.962 18:16:05 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:46.962 18:16:05 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.962 18:16:05 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:46.962 18:16:05 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:46.962 18:16:05 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.962 18:16:05 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:46.962 18:16:05 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.962 18:16:05 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.962 18:16:05 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.962 18:16:05 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:46.962 18:16:05 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.962 18:16:05 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:46.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.962 --rc genhtml_branch_coverage=1 00:06:46.962 --rc genhtml_function_coverage=1 00:06:46.962 --rc genhtml_legend=1 00:06:46.962 --rc geninfo_all_blocks=1 00:06:46.962 --rc geninfo_unexecuted_blocks=1 00:06:46.962 00:06:46.962 ' 00:06:46.962 18:16:05 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:46.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.962 --rc 
genhtml_branch_coverage=1 00:06:46.962 --rc genhtml_function_coverage=1 00:06:46.962 --rc genhtml_legend=1 00:06:46.962 --rc geninfo_all_blocks=1 00:06:46.962 --rc geninfo_unexecuted_blocks=1 00:06:46.962 00:06:46.962 ' 00:06:46.962 18:16:05 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:46.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.962 --rc genhtml_branch_coverage=1 00:06:46.962 --rc genhtml_function_coverage=1 00:06:46.962 --rc genhtml_legend=1 00:06:46.962 --rc geninfo_all_blocks=1 00:06:46.962 --rc geninfo_unexecuted_blocks=1 00:06:46.962 00:06:46.962 ' 00:06:46.962 18:16:05 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:46.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.962 --rc genhtml_branch_coverage=1 00:06:46.962 --rc genhtml_function_coverage=1 00:06:46.962 --rc genhtml_legend=1 00:06:46.962 --rc geninfo_all_blocks=1 00:06:46.962 --rc geninfo_unexecuted_blocks=1 00:06:46.962 00:06:46.962 ' 00:06:46.962 18:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:46.962 18:16:05 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:46.962 18:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:46.962 18:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:46.962 18:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:46.962 18:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:46.962 18:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:46.962 18:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:46.962 18:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:46.962 18:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:46.962 18:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:46.962 18:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:46.962 18:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:06:46.962 18:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:46.962 18:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:46.962 18:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:06:46.962 18:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:46.962 18:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:06:46.962 18:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:46.962 18:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:46.962 18:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:46.962 18:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:06:46.962 18:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:06:46.962 18:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:46.962 18:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60638 00:06:46.962 18:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:46.962 18:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60638 
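start_spdk_tgt launches the target app in the background and waitforlisten then blocks until its JSON-RPC server answers on /var/tmp/spdk.sock. Stripped of the harness plumbing, the pattern is roughly the sketch below (retry count and sleep interval are illustrative; spdk_get_version is just a cheap RPC to probe with):

    ./build/bin/spdk_tgt &               # start the target in the background
    tgt_pid=$!
    trap 'kill $tgt_pid' EXIT            # mirror the killprocess trap above
    for _ in $(seq 1 100); do            # poll until the RPC socket answers
        ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1 && break
        sleep 0.1
    done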
00:06:46.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.962 18:16:05 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 60638 ']' 00:06:46.962 18:16:05 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.962 18:16:05 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.962 18:16:05 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.962 18:16:05 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.962 18:16:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:46.962 18:16:05 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:46.962 [2024-11-20 18:16:05.441218] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:06:46.962 [2024-11-20 18:16:05.441342] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60638 ] 00:06:47.223 [2024-11-20 18:16:05.601537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.223 [2024-11-20 18:16:05.704194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.795 18:16:06 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.795 18:16:06 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:06:47.795 18:16:06 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:47.795 18:16:06 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:06:47.795 18:16:06 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:48.055 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:48.316 Waiting for block devices as requested 00:06:48.316 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:48.316 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:48.316 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:48.577 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:53.856 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:53.856 18:16:12 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:53.856 18:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:53.856 18:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:53.856 18:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:06:53.856 18:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:53.856 18:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:06:53.856 18:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:53.856 18:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:53.856 18:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:53.856 18:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:53.856 18:16:12 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:06:53.856 18:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:53.856 18:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:53.856 18:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:53.856 18:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:53.856 18:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:06:53.856 18:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:06:53.856 18:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:53.856 18:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:53.856 18:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:53.856 18:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:06:53.856 18:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:06:53.856 18:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:53.856 18:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:53.856 18:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:53.856 18:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:06:53.856 18:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:06:53.856 18:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:53.856 18:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:53.856 18:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:53.856 18:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:06:53.856 18:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:06:53.856 18:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:53.856 18:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:53.857 18:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:53.857 18:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:06:53.857 18:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:06:53.857 18:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:53.857 18:16:12 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:53.857 18:16:12 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:53.857 18:16:12 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:53.857 18:16:12 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:53.857 18:16:12 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:53.857 18:16:12 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:53.857 18:16:12 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:53.857 18:16:12 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:53.857 18:16:12 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:53.857 BYT; 00:06:53.857 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:53.857 18:16:12 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:53.857 BYT; 00:06:53.857 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:53.857 18:16:12 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:53.857 18:16:12 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:53.857 18:16:12 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:53.857 18:16:12 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:53.857 18:16:12 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:53.857 18:16:12 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:53.857 18:16:12 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:53.857 18:16:12 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:53.857 18:16:12 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:53.857 18:16:12 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:53.857 18:16:12 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:53.857 18:16:12 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:53.857 18:16:12 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:53.857 18:16:12 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:53.857 18:16:12 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:53.857 18:16:12 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:53.857 18:16:12 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:53.857 18:16:12 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:53.857 18:16:12 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:53.857 18:16:12 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:53.857 18:16:12 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:53.857 18:16:12 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:53.857 18:16:12 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:53.857 18:16:12 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:53.857 18:16:12 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:53.857 18:16:12 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:53.857 18:16:12 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:53.857 18:16:12 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:53.857 18:16:12 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:54.790 The operation has completed successfully. 00:06:54.790 18:16:13 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:55.721 The operation has completed successfully. 00:06:55.721 18:16:14 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:55.978 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:56.542 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:56.542 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:56.542 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:56.542 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:56.542 18:16:15 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:56.542 18:16:15 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.542 18:16:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:56.542 [] 00:06:56.542 18:16:15 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.542 18:16:15 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:56.542 18:16:15 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:56.542 18:16:15 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:56.542 18:16:15 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:56.542 18:16:15 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:56.542 18:16:15 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.542 18:16:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:56.799 18:16:15 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.799 18:16:15 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:56.799 18:16:15 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.799 18:16:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:56.799 18:16:15 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.799 18:16:15 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:06:56.799 18:16:15 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:56.799 18:16:15 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.799 18:16:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:56.799 18:16:15 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.799 18:16:15 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:56.799 18:16:15 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.799 18:16:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:57.057 18:16:15 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.057 18:16:15 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:57.057 18:16:15 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.057 18:16:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:57.057 18:16:15 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.057 18:16:15 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:57.057 18:16:15 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:57.057 18:16:15 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.057 18:16:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:57.057 18:16:15 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:57.057 18:16:15 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.057 18:16:15 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:57.057 18:16:15 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:57.058 18:16:15 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "06467f85-3cc1-4768-952e-d674d3cfb1fa"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "06467f85-3cc1-4768-952e-d674d3cfb1fa",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "94b7742b-556f-4424-ae16-f261a3c000cd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "94b7742b-556f-4424-ae16-f261a3c000cd",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "aac402f0-562d-4b84-936b-a8a2ec572df5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "aac402f0-562d-4b84-936b-a8a2ec572df5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "023eaa3c-4623-4937-aa53-c39e7a12bea2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "023eaa3c-4623-4937-aa53-c39e7a12bea2",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "fced1454-0b95-4b37-b82a-50715f00c890"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "fced1454-0b95-4b37-b82a-50715f00c890",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:57.058 18:16:15 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:57.058 18:16:15 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:57.058 18:16:15 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:57.058 18:16:15 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 60638 00:06:57.058 18:16:15 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 60638 ']' 00:06:57.058 18:16:15 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 60638 00:06:57.058 18:16:15 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:06:57.058 18:16:15 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:57.058 18:16:15 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60638 00:06:57.058 18:16:15 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:57.058 killing process with pid 60638 00:06:57.058 18:16:15 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:57.058 18:16:15 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60638' 00:06:57.058 18:16:15 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 60638 00:06:57.058 18:16:15 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 60638 00:06:58.512 18:16:16 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:58.512 18:16:16 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:58.512 18:16:16 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:58.512 18:16:16 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.512 18:16:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:58.512 ************************************ 00:06:58.512 START TEST bdev_hello_world 00:06:58.512 ************************************ 00:06:58.512 18:16:16 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:58.512 
[2024-11-20 18:16:16.785743] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:06:58.512 [2024-11-20 18:16:16.785835] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61258 ] 00:06:58.512 [2024-11-20 18:16:16.934046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.512 [2024-11-20 18:16:17.009260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.078 [2024-11-20 18:16:17.497025] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:59.078 [2024-11-20 18:16:17.497060] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:59.078 [2024-11-20 18:16:17.497076] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:59.078 [2024-11-20 18:16:17.498985] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:59.078 [2024-11-20 18:16:17.499425] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:59.078 [2024-11-20 18:16:17.499447] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:59.078 [2024-11-20 18:16:17.499660] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:59.078 00:06:59.078 [2024-11-20 18:16:17.499683] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:59.646 00:06:59.646 real 0m1.319s 00:06:59.646 user 0m1.061s 00:06:59.646 sys 0m0.154s 00:06:59.646 18:16:18 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.646 18:16:18 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:59.646 ************************************ 00:06:59.646 END TEST bdev_hello_world 00:06:59.646 ************************************ 00:06:59.646 18:16:18 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:59.646 18:16:18 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:59.646 18:16:18 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.646 18:16:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:59.646 ************************************ 00:06:59.646 START TEST bdev_bounds 00:06:59.646 ************************************ 00:06:59.646 18:16:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:59.646 18:16:18 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61295 00:06:59.646 18:16:18 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:59.646 Process bdevio pid: 61295 00:06:59.646 18:16:18 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61295' 00:06:59.646 18:16:18 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61295 00:06:59.646 18:16:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61295 ']' 00:06:59.646 18:16:18 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:59.646 18:16:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.646 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.646 18:16:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.646 18:16:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.646 18:16:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.646 18:16:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:59.646 [2024-11-20 18:16:18.174069] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:06:59.646 [2024-11-20 18:16:18.174202] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61295 ] 00:06:59.906 [2024-11-20 18:16:18.330160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:59.906 [2024-11-20 18:16:18.410724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.906 [2024-11-20 18:16:18.410665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.906 [2024-11-20 18:16:18.410797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.475 18:16:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.475 18:16:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:00.475 18:16:19 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:00.736 I/O targets: 00:07:00.736 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:00.736 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:00.736 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:00.736 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:00.736 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:00.736 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:00.736 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:00.736 00:07:00.736 00:07:00.736 CUnit - A unit testing framework for C - Version 2.1-3 00:07:00.736 http://cunit.sourceforge.net/ 00:07:00.736 00:07:00.736 00:07:00.736 Suite: bdevio tests on: Nvme3n1 00:07:00.736 Test: blockdev write read block ...passed 00:07:00.736 Test: blockdev write zeroes read block ...passed 00:07:00.736 Test: blockdev write zeroes read no split ...passed 00:07:00.736 Test: blockdev write zeroes read split ...passed 00:07:00.736 Test: blockdev write zeroes read split partial ...passed 00:07:00.736 Test: blockdev reset ...[2024-11-20 18:16:19.154077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:00.736 [2024-11-20 18:16:19.158159] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
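Each suite's "blockdev reset" test issues a bdev-level reset; for an NVMe bdev that means disconnecting and reconnecting the controller, which is exactly the NOTICE pair printed above by nvme_ctrlr.c and bdev_nvme.c. Outside bdevio the same disconnect/reconnect can be requested over RPC — a sketch, assuming the bdev_nvme_reset_controller RPC is available in this SPDK vintage and using the Nvme0..Nvme3 controller names from the config loaded earlier:

    # bounce the controller backing the Nvme3n1 bdev
    ./scripts/rpc.py bdev_nvme_reset_controller Nvme3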
00:07:00.736 passed 00:07:00.736 Test: blockdev write read 8 blocks ...passed 00:07:00.736 Test: blockdev write read size > 128k ...passed 00:07:00.736 Test: blockdev write read invalid size ...passed 00:07:00.736 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.736 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.736 Test: blockdev write read max offset ...passed 00:07:00.736 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.736 Test: blockdev writev readv 8 blocks ...passed 00:07:00.736 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.736 Test: blockdev writev readv block ...passed 00:07:00.736 Test: blockdev writev readv size > 128k ...passed 00:07:00.736 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.736 Test: blockdev comparev and writev ...[2024-11-20 18:16:19.177346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b1604000 len:0x1000 00:07:00.736 [2024-11-20 18:16:19.177394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:00.736 passed 00:07:00.736 Test: blockdev nvme passthru rw ...passed 00:07:00.736 Test: blockdev nvme passthru vendor specific ...[2024-11-20 18:16:19.179771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:00.736 [2024-11-20 18:16:19.179802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:00.736 passed 00:07:00.736 Test: blockdev nvme admin passthru ...passed 00:07:00.736 Test: blockdev copy ...passed 00:07:00.736 Suite: bdevio tests on: Nvme2n3 00:07:00.736 Test: blockdev write read block ...passed 00:07:00.736 Test: blockdev write zeroes read block ...passed 00:07:00.736 Test: blockdev write zeroes read no split ...passed 00:07:00.736 Test: blockdev write zeroes read split ...passed 00:07:00.736 Test: blockdev write zeroes read split partial ...passed 00:07:00.736 Test: blockdev reset ...[2024-11-20 18:16:19.236799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:00.736 [2024-11-20 18:16:19.243230] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
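Nvme2n1, Nvme2n2 and Nvme2n3 are three namespaces of a single controller (serial 12342 at 0000:00:12.0, nsid 1-3 in the bdev dump earlier), so this reset and the two that follow all bounce the same controller. The grouping is visible on a live target with rpc.py plus jq, both already used elsewhere in this log:

    ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'
    # map each NVMe bdev to its controller PCI address and namespace id
    ./scripts/rpc.py bdev_get_bdevs | jq -r \
        '.[] | select(.driver_specific.nvme) | [.name, .driver_specific.nvme[0].pci_address, .driver_specific.nvme[0].ns_data.id] | @tsv'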
00:07:00.736 passed 00:07:00.736 Test: blockdev write read 8 blocks ...passed 00:07:00.736 Test: blockdev write read size > 128k ...passed 00:07:00.736 Test: blockdev write read invalid size ...passed 00:07:00.736 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.736 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.736 Test: blockdev write read max offset ...passed 00:07:00.736 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.736 Test: blockdev writev readv 8 blocks ...passed 00:07:00.736 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.736 Test: blockdev writev readv block ...passed 00:07:00.736 Test: blockdev writev readv size > 128k ...passed 00:07:00.736 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.736 Test: blockdev comparev and writev ...[2024-11-20 18:16:19.261113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b1602000 len:0x1000 00:07:00.736 [2024-11-20 18:16:19.261160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:00.736 passed 00:07:00.736 Test: blockdev nvme passthru rw ...passed 00:07:00.736 Test: blockdev nvme passthru vendor specific ...[2024-11-20 18:16:19.263733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:00.736 [2024-11-20 18:16:19.263767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:00.736 passed 00:07:00.736 Test: blockdev nvme admin passthru ...passed 00:07:00.736 Test: blockdev copy ...passed 00:07:00.736 Suite: bdevio tests on: Nvme2n2 00:07:00.736 Test: blockdev write read block ...passed 00:07:00.736 Test: blockdev write zeroes read block ...passed 00:07:00.736 Test: blockdev write zeroes read no split ...passed 00:07:00.736 Test: blockdev write zeroes read split ...passed 00:07:00.736 Test: blockdev write zeroes read split partial ...passed 00:07:00.736 Test: blockdev reset ...[2024-11-20 18:16:19.318534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:00.736 [2024-11-20 18:16:19.322511] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
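The COMPARE FAILURE (02/85) completions logged above are not bugs: 02/85 decodes to status code type 2 (media and data integrity errors), status code 0x85, NVMe's Compare Failure — apparently the intended negative half of the comparev-and-writev check, which re-compares after the data has been overwritten. Each device suite still ends in "passed". If the console output is saved to a file, the expected misses are easy to tally (file name illustrative):

    # one deliberate compare miss is logged per comparev-and-writev test
    grep -c 'COMPARE FAILURE (02/85)' bdevio_console.log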
00:07:00.736 passed 00:07:00.736 Test: blockdev write read 8 blocks ...passed 00:07:00.736 Test: blockdev write read size > 128k ...passed 00:07:00.736 Test: blockdev write read invalid size ...passed 00:07:00.736 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.736 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.736 Test: blockdev write read max offset ...passed 00:07:00.736 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.736 Test: blockdev writev readv 8 blocks ...passed 00:07:00.736 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.736 Test: blockdev writev readv block ...passed 00:07:00.736 Test: blockdev writev readv size > 128k ...passed 00:07:00.736 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.736 Test: blockdev comparev and writev ...[2024-11-20 18:16:19.340016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d4838000 len:0x1000 00:07:00.736 [2024-11-20 18:16:19.340057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:00.736 passed 00:07:00.736 Test: blockdev nvme passthru rw ...passed 00:07:00.736 Test: blockdev nvme passthru vendor specific ...[2024-11-20 18:16:19.342499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:00.736 [2024-11-20 18:16:19.342526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:00.736 passed 00:07:00.736 Test: blockdev nvme admin passthru ...passed 00:07:00.736 Test: blockdev copy ...passed 00:07:00.736 Suite: bdevio tests on: Nvme2n1 00:07:00.736 Test: blockdev write read block ...passed 00:07:00.736 Test: blockdev write zeroes read block ...passed 00:07:00.997 Test: blockdev write zeroes read no split ...passed 00:07:00.997 Test: blockdev write zeroes read split ...passed 00:07:00.997 Test: blockdev write zeroes read split partial ...passed 00:07:00.997 Test: blockdev reset ...[2024-11-20 18:16:19.400547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:00.997 [2024-11-20 18:16:19.404842] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:00.997 passed 00:07:00.997 Test: blockdev write read 8 blocks ...passed 00:07:00.997 Test: blockdev write read size > 128k ...passed 00:07:00.997 Test: blockdev write read invalid size ...passed 00:07:00.997 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.997 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.997 Test: blockdev write read max offset ...passed 00:07:00.997 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.997 Test: blockdev writev readv 8 blocks ...passed 00:07:00.997 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.997 Test: blockdev writev readv block ...passed 00:07:00.997 Test: blockdev writev readv size > 128k ...passed 00:07:00.997 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.997 Test: blockdev comparev and writev ...[2024-11-20 18:16:19.422123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d4834000 len:0x1000 00:07:00.997 [2024-11-20 18:16:19.422163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:00.997 passed 00:07:00.997 Test: blockdev nvme passthru rw ...passed 00:07:00.997 Test: blockdev nvme passthru vendor specific ...passed 00:07:00.997 Test: blockdev nvme admin passthru ...[2024-11-20 18:16:19.424731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:00.997 [2024-11-20 18:16:19.424764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:00.997 passed 00:07:00.997 Test: blockdev copy ...passed 00:07:00.997 Suite: bdevio tests on: Nvme1n1p2 00:07:00.997 Test: blockdev write read block ...passed 00:07:00.997 Test: blockdev write zeroes read block ...passed 00:07:00.997 Test: blockdev write zeroes read no split ...passed 00:07:00.997 Test: blockdev write zeroes read split ...passed 00:07:00.997 Test: blockdev write zeroes read split partial ...passed 00:07:00.997 Test: blockdev reset ...[2024-11-20 18:16:19.484008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:00.997 [2024-11-20 18:16:19.487519] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
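The suites have now reached the GPT partition bdevs. A reset on Nvme1n1p2 still bounces the whole controller at 0000:00:11.0, since a partition bdev is only an offset window onto its base namespace. The partitions themselves were produced earlier in this log by parted/sgdisk; condensed (GUIDs copied from the trace, where they were grep'ed out of module/bdev/gpt/gpt.h; the kernel node /dev/nvme0n1 maps to SPDK's Nvme1 because the two enumerate devices in different orders):

    # label the disk and create two half-disk partitions
    parted -s /dev/nvme0n1 mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% \
        mkpart SPDK_TEST_second 50% 100%
    # retype partition 1 with the current SPDK partition-type GUID, partition 2 with the legacy one
    sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
    sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1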
00:07:00.997 passed 00:07:00.997 Test: blockdev write read 8 blocks ...passed 00:07:00.997 Test: blockdev write read size > 128k ...passed 00:07:00.997 Test: blockdev write read invalid size ...passed 00:07:00.997 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.997 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.997 Test: blockdev write read max offset ...passed 00:07:00.997 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.997 Test: blockdev writev readv 8 blocks ...passed 00:07:00.997 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.997 Test: blockdev writev readv block ...passed 00:07:00.997 Test: blockdev writev readv size > 128k ...passed 00:07:00.997 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.997 Test: blockdev comparev and writev ...[2024-11-20 18:16:19.504404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d4830000 len:0x1000 00:07:00.997 [2024-11-20 18:16:19.504441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:00.997 passed 00:07:00.997 Test: blockdev nvme passthru rw ...passed 00:07:00.997 Test: blockdev nvme passthru vendor specific ...passed 00:07:00.997 Test: blockdev nvme admin passthru ...passed 00:07:00.997 Test: blockdev copy ...passed 00:07:00.997 Suite: bdevio tests on: Nvme1n1p1 00:07:00.997 Test: blockdev write read block ...passed 00:07:00.997 Test: blockdev write zeroes read block ...passed 00:07:00.997 Test: blockdev write zeroes read no split ...passed 00:07:00.997 Test: blockdev write zeroes read split ...passed 00:07:00.997 Test: blockdev write zeroes read split partial ...passed 00:07:00.997 Test: blockdev reset ...[2024-11-20 18:16:19.555315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:00.997 [2024-11-20 18:16:19.558935] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
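The gpt module claimed Nvme1n1 and exposed the two partitions as bdevs at offset_blocks 256 (p1) and 655360 (p2), matching the dump earlier. On a live target the GPT-backed bdevs and their offsets can be pulled out with jq (field paths taken from that dump):

    ./scripts/rpc.py bdev_get_bdevs | jq -r \
        '.[] | select(.driver_specific.gpt) | [.name, .driver_specific.gpt.base_bdev, .driver_specific.gpt.offset_blocks] | @tsv'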
00:07:00.997 passed 00:07:00.997 Test: blockdev write read 8 blocks ...passed 00:07:00.997 Test: blockdev write read size > 128k ...passed 00:07:00.997 Test: blockdev write read invalid size ...passed 00:07:00.997 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.997 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.997 Test: blockdev write read max offset ...passed 00:07:00.997 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.997 Test: blockdev writev readv 8 blocks ...passed 00:07:00.997 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.997 Test: blockdev writev readv block ...passed 00:07:00.997 Test: blockdev writev readv size > 128k ...passed 00:07:00.997 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.997 Test: blockdev comparev and writev ...[2024-11-20 18:16:19.577188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b180e000 len:0x1000 00:07:00.997 [2024-11-20 18:16:19.577224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:00.997 passed 00:07:00.997 Test: blockdev nvme passthru rw ...passed 00:07:00.997 Test: blockdev nvme passthru vendor specific ...passed 00:07:00.997 Test: blockdev nvme admin passthru ...passed 00:07:00.997 Test: blockdev copy ...passed 00:07:00.997 Suite: bdevio tests on: Nvme0n1 00:07:00.997 Test: blockdev write read block ...passed 00:07:00.997 Test: blockdev write zeroes read block ...passed 00:07:00.997 Test: blockdev write zeroes read no split ...passed 00:07:00.997 Test: blockdev write zeroes read split ...passed 00:07:01.259 Test: blockdev write zeroes read split partial ...passed 00:07:01.259 Test: blockdev reset ...[2024-11-20 18:16:19.628045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:01.259 [2024-11-20 18:16:19.631551] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:01.259 passed 00:07:01.259 Test: blockdev write read 8 blocks ...passed 00:07:01.259 Test: blockdev write read size > 128k ...passed 00:07:01.259 Test: blockdev write read invalid size ...passed 00:07:01.259 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:01.259 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:01.259 Test: blockdev write read max offset ...passed 00:07:01.259 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:01.259 Test: blockdev writev readv 8 blocks ...passed 00:07:01.259 Test: blockdev writev readv 30 x 1block ...passed 00:07:01.259 Test: blockdev writev readv block ...passed 00:07:01.259 Test: blockdev writev readv size > 128k ...passed 00:07:01.259 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:01.259 Test: blockdev comparev and writev ...passed 00:07:01.259 Test: blockdev nvme passthru rw ...[2024-11-20 18:16:19.645873] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:01.259 separate metadata which is not supported yet. 
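[Editor's note] The *ERROR* line above is a deliberate skip, not a test failure: Nvme0n1 is formatted with separate (non-interleaved) per-block metadata, which bdevio's comparev_and_writev path does not support yet. Which bdevs fall into that category can be listed over RPC; a hedged sketch assuming the md_size field that bdev_get_bdevs reports in current SPDK JSON output (the field name is an assumption, verify against your build):

# List bdevs carrying per-block metadata, i.e. the ones bdevio would skip.
# Assumes a running SPDK app on the default RPC socket.
./scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.md_size > 0) | .name'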
00:07:01.259 passed 00:07:01.259 Test: blockdev nvme passthru vendor specific ...[2024-11-20 18:16:19.647271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:01.259 [2024-11-20 18:16:19.647305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:01.259 passed 00:07:01.259 Test: blockdev nvme admin passthru ...passed 00:07:01.259 Test: blockdev copy ...passed 00:07:01.259 00:07:01.259 Run Summary: Type Total Ran Passed Failed Inactive 00:07:01.259 suites 7 7 n/a 0 0 00:07:01.259 tests 161 161 161 0 0 00:07:01.259 asserts 1025 1025 1025 0 n/a 00:07:01.259 00:07:01.259 Elapsed time = 1.379 seconds 00:07:01.259 0 00:07:01.259 18:16:19 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61295 00:07:01.259 18:16:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61295 ']' 00:07:01.259 18:16:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61295 00:07:01.259 18:16:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:01.259 18:16:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:01.259 18:16:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61295 00:07:01.259 18:16:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:01.259 18:16:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:01.259 killing process with pid 61295 00:07:01.259 18:16:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61295' 00:07:01.259 18:16:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61295 00:07:01.259 18:16:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61295 00:07:01.829 18:16:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:01.829 00:07:01.829 real 0m2.231s 00:07:01.829 user 0m5.699s 00:07:01.829 sys 0m0.266s 00:07:01.829 18:16:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.829 ************************************ 00:07:01.829 END TEST bdev_bounds 00:07:01.829 ************************************ 00:07:01.829 18:16:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:01.829 18:16:20 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:01.829 18:16:20 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:01.829 18:16:20 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.829 18:16:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:01.829 ************************************ 00:07:01.829 START TEST bdev_nbd 00:07:01.829 ************************************ 00:07:01.829 18:16:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:01.829 18:16:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:01.829 18:16:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:07:01.829 18:16:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.829 18:16:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:01.829 18:16:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:01.829 18:16:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:01.829 18:16:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:01.829 18:16:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:01.829 18:16:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:01.829 18:16:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:01.829 18:16:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:01.829 18:16:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:01.829 18:16:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:01.829 18:16:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:01.829 18:16:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:01.829 18:16:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61349 00:07:01.829 18:16:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:01.829 18:16:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61349 /var/tmp/spdk-nbd.sock 00:07:01.829 18:16:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61349 ']' 00:07:01.829 18:16:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:01.829 18:16:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.829 18:16:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:01.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:01.829 18:16:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:01.829 18:16:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.829 18:16:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:02.089 [2024-11-20 18:16:20.470032] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
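[Editor's note] This is the NBD harness booting: nbd_function_test launches the minimal bdev_svc app on a private RPC socket (/var/tmp/spdk-nbd.sock), waits for it to listen, and then drives every later rpc.py call through -s against that socket. The same sequence, condensed from the xtrace (paths follow the log; polling rpc_get_methods stands in for the harness's waitforlisten helper):

# Boot a bdev-only SPDK app on a private socket and wait until it accepts RPCs.
SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk-nbd.sock
[[ -e /sys/module/nbd ]] || sudo modprobe nbd    # NBD kernel module, as checked above
"$SPDK/test/app/bdev_svc/bdev_svc" -r "$SOCK" -i 0 --json "$SPDK/test/bdev/bdev.json" &
svc_pid=$!
until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
  sleep 0.2    # poll until the UNIX domain socket is live
done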
00:07:02.089 [2024-11-20 18:16:20.470168] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:02.089 [2024-11-20 18:16:20.630626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.349 [2024-11-20 18:16:20.729741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.921 18:16:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.921 18:16:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:02.921 18:16:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:02.921 18:16:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.921 18:16:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:02.921 18:16:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:02.921 18:16:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:02.921 18:16:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.921 18:16:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:02.921 18:16:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:02.921 18:16:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:02.921 18:16:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:02.921 18:16:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:02.921 18:16:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:02.921 18:16:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:02.921 18:16:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:02.921 18:16:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:02.921 18:16:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:02.921 18:16:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:02.921 18:16:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:02.921 18:16:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:02.921 18:16:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:02.921 18:16:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:02.921 18:16:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:02.921 18:16:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:02.921 18:16:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:02.921 18:16:21 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:02.921 1+0 records in 00:07:02.921 1+0 records out 00:07:02.921 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000841405 s, 4.9 MB/s 00:07:02.921 18:16:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.182 18:16:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:03.182 18:16:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.182 18:16:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:03.182 18:16:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:03.182 18:16:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:03.182 18:16:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:03.182 18:16:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:03.182 18:16:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:03.182 18:16:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:03.182 18:16:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:03.182 18:16:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:03.182 18:16:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:03.182 18:16:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:03.182 18:16:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:03.182 18:16:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:03.182 18:16:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:03.182 18:16:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:03.182 18:16:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:03.182 18:16:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.182 1+0 records in 00:07:03.182 1+0 records out 00:07:03.182 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109617 s, 3.7 MB/s 00:07:03.182 18:16:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.182 18:16:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:03.182 18:16:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.182 18:16:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:03.182 18:16:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:03.182 18:16:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:03.182 18:16:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:03.182 18:16:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:03.444 18:16:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:03.444 18:16:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:03.444 18:16:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:03.444 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:03.444 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:03.444 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:03.444 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:03.444 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:03.444 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:03.444 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:03.444 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:03.444 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.444 1+0 records in 00:07:03.444 1+0 records out 00:07:03.444 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100682 s, 4.1 MB/s 00:07:03.444 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.444 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:03.444 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.444 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:03.444 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:03.444 18:16:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:03.444 18:16:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:03.444 18:16:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:03.705 18:16:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:03.705 18:16:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:03.705 18:16:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:03.705 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:03.705 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:03.705 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:03.705 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:03.705 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:03.705 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:03.705 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:03.705 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:03.705 18:16:22 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.705 1+0 records in 00:07:03.705 1+0 records out 00:07:03.705 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010013 s, 4.1 MB/s 00:07:03.705 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.705 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:03.705 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.705 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:03.705 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:03.705 18:16:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:03.705 18:16:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:03.705 18:16:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:03.966 18:16:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:03.966 18:16:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:03.966 18:16:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:03.966 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:03.966 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:03.966 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:03.966 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:03.966 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:03.966 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:03.966 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:03.966 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:03.966 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.966 1+0 records in 00:07:03.966 1+0 records out 00:07:03.966 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000836394 s, 4.9 MB/s 00:07:03.966 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.966 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:03.966 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.966 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:03.966 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:03.966 18:16:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:03.966 18:16:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:03.966 18:16:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
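[Editor's note] Each nbd_start_disk above is followed by the same two-step readiness check: poll /proc/partitions until the kernel registers the device, then prove it is readable with one direct-I/O read of a single 4 KiB block and verify the copied size. The helper, reconstructed from the xtrace (the real implementation lives in autotest_common.sh; the retry delay is assumed, since every attempt in this run succeeds on the first pass):

# Wait for /dev/<nbd_name> to appear, then read one 4 KiB block through it.
waitfornbd() {
  local nbd_name=$1 i
  for ((i = 1; i <= 20; i++)); do
    grep -q -w "$nbd_name" /proc/partitions && break
    sleep 0.1    # assumed delay; not visible in the xtrace above
  done
  dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  [[ $(stat -c %s /tmp/nbdtest) == 4096 ]]    # exactly one full block must arrive
  rm -f /tmp/nbdtest
}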
00:07:04.227 18:16:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:04.227 18:16:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:04.227 18:16:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:04.227 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:04.227 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:04.227 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:04.227 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:04.227 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:04.227 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:04.227 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:04.227 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:04.227 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.227 1+0 records in 00:07:04.227 1+0 records out 00:07:04.227 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000796717 s, 5.1 MB/s 00:07:04.227 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.227 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:04.227 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.227 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:04.227 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:04.227 18:16:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:04.227 18:16:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:04.227 18:16:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:04.488 18:16:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:04.488 18:16:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:04.488 18:16:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:04.488 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:07:04.488 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:04.488 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:04.488 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:04.488 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:07:04.488 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:04.488 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:04.488 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:04.488 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.488 1+0 records in 00:07:04.488 1+0 records out 00:07:04.488 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010742 s, 3.8 MB/s 00:07:04.488 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.488 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:04.488 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.488 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:04.488 18:16:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:04.488 18:16:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:04.488 18:16:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:04.488 18:16:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:04.747 18:16:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:04.747 { 00:07:04.747 "nbd_device": "/dev/nbd0", 00:07:04.747 "bdev_name": "Nvme0n1" 00:07:04.747 }, 00:07:04.747 { 00:07:04.747 "nbd_device": "/dev/nbd1", 00:07:04.747 "bdev_name": "Nvme1n1p1" 00:07:04.747 }, 00:07:04.747 { 00:07:04.747 "nbd_device": "/dev/nbd2", 00:07:04.747 "bdev_name": "Nvme1n1p2" 00:07:04.747 }, 00:07:04.747 { 00:07:04.747 "nbd_device": "/dev/nbd3", 00:07:04.747 "bdev_name": "Nvme2n1" 00:07:04.747 }, 00:07:04.747 { 00:07:04.747 "nbd_device": "/dev/nbd4", 00:07:04.747 "bdev_name": "Nvme2n2" 00:07:04.747 }, 00:07:04.747 { 00:07:04.747 "nbd_device": "/dev/nbd5", 00:07:04.747 "bdev_name": "Nvme2n3" 00:07:04.747 }, 00:07:04.747 { 00:07:04.747 "nbd_device": "/dev/nbd6", 00:07:04.747 "bdev_name": "Nvme3n1" 00:07:04.747 } 00:07:04.747 ]' 00:07:04.747 18:16:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:04.747 18:16:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:04.747 { 00:07:04.747 "nbd_device": "/dev/nbd0", 00:07:04.747 "bdev_name": "Nvme0n1" 00:07:04.747 }, 00:07:04.747 { 00:07:04.747 "nbd_device": "/dev/nbd1", 00:07:04.747 "bdev_name": "Nvme1n1p1" 00:07:04.747 }, 00:07:04.747 { 00:07:04.747 "nbd_device": "/dev/nbd2", 00:07:04.747 "bdev_name": "Nvme1n1p2" 00:07:04.747 }, 00:07:04.747 { 00:07:04.747 "nbd_device": "/dev/nbd3", 00:07:04.747 "bdev_name": "Nvme2n1" 00:07:04.747 }, 00:07:04.747 { 00:07:04.747 "nbd_device": "/dev/nbd4", 00:07:04.747 "bdev_name": "Nvme2n2" 00:07:04.747 }, 00:07:04.748 { 00:07:04.748 "nbd_device": "/dev/nbd5", 00:07:04.748 "bdev_name": "Nvme2n3" 00:07:04.748 }, 00:07:04.748 { 00:07:04.748 "nbd_device": "/dev/nbd6", 00:07:04.748 "bdev_name": "Nvme3n1" 00:07:04.748 } 00:07:04.748 ]' 00:07:04.748 18:16:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:04.748 18:16:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:04.748 18:16:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.748 18:16:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:04.748 18:16:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:04.748 18:16:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:04.748 18:16:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.748 18:16:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:05.006 18:16:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:05.006 18:16:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:05.006 18:16:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:05.006 18:16:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.006 18:16:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.006 18:16:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:05.006 18:16:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.006 18:16:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.006 18:16:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.006 18:16:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:05.006 18:16:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:05.006 18:16:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:05.006 18:16:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:05.006 18:16:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.006 18:16:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.006 18:16:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:05.006 18:16:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.006 18:16:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.006 18:16:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.006 18:16:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:05.265 18:16:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:05.265 18:16:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:05.265 18:16:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:05.265 18:16:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.265 18:16:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.265 18:16:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:05.265 18:16:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.265 18:16:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.265 18:16:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.265 18:16:23 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:05.523 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:05.523 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:05.523 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:05.523 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.523 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.523 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:05.523 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.523 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.523 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.523 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:05.781 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:05.781 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:05.781 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:05.781 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.781 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.781 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:05.781 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.781 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.781 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.781 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:06.040 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:06.040 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:06.040 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:06.040 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.040 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.040 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:06.040 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.040 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.040 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.040 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:06.040 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:06.040 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:06.040 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
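[Editor's note] The stop loop mirrors the start loop: every exported device is detached with nbd_stop_disk, waitfornbd_exit confirms it has left /proc/partitions, and nbd_get_disks is queried afterwards to verify the export count drops to zero (the count=0 check that follows below). The lifecycle tail, condensed from the xtrace:

# Detach every export and confirm the service reports none left.
SOCK=/var/tmp/spdk-nbd.sock
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for dev in /dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6; do
  "$RPC" -s "$SOCK" nbd_stop_disk "$dev"
done
count=$("$RPC" -s "$SOCK" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
(( count == 0 )) && echo 'all NBD devices detached'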
00:07:06.040 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.040 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.040 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:06.040 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.040 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.040 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:06.040 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.040 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:06.298 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:06.298 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:06.298 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:06.298 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:06.298 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:06.298 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:06.298 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:06.298 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:06.298 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:06.298 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:06.298 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:06.298 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:06.298 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:06.298 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.298 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:06.298 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:06.298 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:06.298 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:06.298 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:06.298 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.298 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:06.298 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:06.298 18:16:24 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:06.298 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:06.298 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:06.298 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:06.298 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:06.298 18:16:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:06.556 /dev/nbd0 00:07:06.557 18:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:06.557 18:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:06.557 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:06.557 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:06.557 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:06.557 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:06.557 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:06.557 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:06.557 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:06.557 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:06.557 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:06.557 1+0 records in 00:07:06.557 1+0 records out 00:07:06.557 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00066763 s, 6.1 MB/s 00:07:06.557 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.557 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:06.557 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.557 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:06.557 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:06.557 18:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:06.557 18:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:06.557 18:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:06.815 /dev/nbd1 00:07:06.815 18:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:06.815 18:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:06.815 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:06.815 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:06.815 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:06.815 18:16:25 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:06.815 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:06.815 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:06.815 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:06.815 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:06.815 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:06.815 1+0 records in 00:07:06.815 1+0 records out 00:07:06.815 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000501769 s, 8.2 MB/s 00:07:06.815 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.815 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:06.815 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.815 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:06.815 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:06.815 18:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:06.815 18:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:06.815 18:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:07.074 /dev/nbd10 00:07:07.074 18:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:07.074 18:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:07.074 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:07.074 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:07.074 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:07.074 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:07.074 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:07.074 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:07.074 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:07.074 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:07.074 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:07.074 1+0 records in 00:07:07.074 1+0 records out 00:07:07.074 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000525956 s, 7.8 MB/s 00:07:07.074 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.074 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:07.074 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.074 18:16:25 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:07.074 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:07.074 18:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.074 18:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:07.074 18:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:07.074 /dev/nbd11 00:07:07.332 18:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:07.332 18:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:07.332 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:07.332 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:07.332 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:07.332 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:07.332 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:07.332 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:07.332 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:07.332 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:07.332 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:07.332 1+0 records in 00:07:07.332 1+0 records out 00:07:07.332 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247014 s, 16.6 MB/s 00:07:07.332 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.332 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:07.332 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.332 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:07.332 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:07.333 18:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.333 18:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:07.333 18:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:07.333 /dev/nbd12 00:07:07.333 18:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:07.333 18:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:07.333 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:07.333 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:07.333 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:07.333 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:07.333 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 
/proc/partitions 00:07:07.333 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:07.333 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:07.333 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:07.333 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:07.333 1+0 records in 00:07:07.333 1+0 records out 00:07:07.333 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000511158 s, 8.0 MB/s 00:07:07.333 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.333 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:07.333 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.333 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:07.333 18:16:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:07.333 18:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.333 18:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:07.333 18:16:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:07.591 /dev/nbd13 00:07:07.591 18:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:07.591 18:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:07.591 18:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:07.591 18:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:07.591 18:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:07.591 18:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:07.591 18:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:07.591 18:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:07.591 18:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:07.591 18:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:07.591 18:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:07.591 1+0 records in 00:07:07.591 1+0 records out 00:07:07.591 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338954 s, 12.1 MB/s 00:07:07.591 18:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.591 18:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:07.591 18:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.591 18:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:07.591 18:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:07.591 18:16:26 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.591 18:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:07.591 18:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:07.849 /dev/nbd14 00:07:07.850 18:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:07.850 18:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:07.850 18:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:07:07.850 18:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:07.850 18:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:07.850 18:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:07.850 18:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:07:07.850 18:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:07.850 18:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:07.850 18:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:07.850 18:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:07.850 1+0 records in 00:07:07.850 1+0 records out 00:07:07.850 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383828 s, 10.7 MB/s 00:07:07.850 18:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.850 18:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:07.850 18:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.850 18:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:07.850 18:16:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:07.850 18:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.850 18:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:07.850 18:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:07.850 18:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.850 18:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:08.108 18:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:08.108 { 00:07:08.108 "nbd_device": "/dev/nbd0", 00:07:08.108 "bdev_name": "Nvme0n1" 00:07:08.108 }, 00:07:08.108 { 00:07:08.108 "nbd_device": "/dev/nbd1", 00:07:08.108 "bdev_name": "Nvme1n1p1" 00:07:08.108 }, 00:07:08.108 { 00:07:08.108 "nbd_device": "/dev/nbd10", 00:07:08.108 "bdev_name": "Nvme1n1p2" 00:07:08.108 }, 00:07:08.108 { 00:07:08.108 "nbd_device": "/dev/nbd11", 00:07:08.108 "bdev_name": "Nvme2n1" 00:07:08.108 }, 00:07:08.108 { 00:07:08.108 "nbd_device": "/dev/nbd12", 00:07:08.108 "bdev_name": "Nvme2n2" 00:07:08.108 }, 00:07:08.108 { 00:07:08.108 "nbd_device": "/dev/nbd13", 
00:07:08.108 "bdev_name": "Nvme2n3" 00:07:08.108 }, 00:07:08.108 { 00:07:08.108 "nbd_device": "/dev/nbd14", 00:07:08.108 "bdev_name": "Nvme3n1" 00:07:08.108 } 00:07:08.108 ]' 00:07:08.108 18:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:08.108 { 00:07:08.108 "nbd_device": "/dev/nbd0", 00:07:08.108 "bdev_name": "Nvme0n1" 00:07:08.108 }, 00:07:08.108 { 00:07:08.108 "nbd_device": "/dev/nbd1", 00:07:08.108 "bdev_name": "Nvme1n1p1" 00:07:08.108 }, 00:07:08.108 { 00:07:08.108 "nbd_device": "/dev/nbd10", 00:07:08.108 "bdev_name": "Nvme1n1p2" 00:07:08.108 }, 00:07:08.108 { 00:07:08.108 "nbd_device": "/dev/nbd11", 00:07:08.108 "bdev_name": "Nvme2n1" 00:07:08.108 }, 00:07:08.108 { 00:07:08.108 "nbd_device": "/dev/nbd12", 00:07:08.108 "bdev_name": "Nvme2n2" 00:07:08.108 }, 00:07:08.108 { 00:07:08.108 "nbd_device": "/dev/nbd13", 00:07:08.108 "bdev_name": "Nvme2n3" 00:07:08.108 }, 00:07:08.108 { 00:07:08.108 "nbd_device": "/dev/nbd14", 00:07:08.108 "bdev_name": "Nvme3n1" 00:07:08.108 } 00:07:08.108 ]' 00:07:08.108 18:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:08.108 18:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:08.108 /dev/nbd1 00:07:08.108 /dev/nbd10 00:07:08.108 /dev/nbd11 00:07:08.108 /dev/nbd12 00:07:08.108 /dev/nbd13 00:07:08.108 /dev/nbd14' 00:07:08.108 18:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:08.108 18:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:08.108 /dev/nbd1 00:07:08.108 /dev/nbd10 00:07:08.108 /dev/nbd11 00:07:08.108 /dev/nbd12 00:07:08.108 /dev/nbd13 00:07:08.108 /dev/nbd14' 00:07:08.108 18:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:08.108 18:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:08.108 18:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:08.108 18:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:08.108 18:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:08.108 18:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:08.108 18:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:08.108 18:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:08.108 18:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:08.108 18:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:08.108 18:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:08.108 256+0 records in 00:07:08.108 256+0 records out 00:07:08.108 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00767775 s, 137 MB/s 00:07:08.108 18:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:08.108 18:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:08.108 256+0 records in 00:07:08.108 256+0 records out 00:07:08.109 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.076329 s, 13.7 MB/s 00:07:08.109 18:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:08.109 18:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:08.109 256+0 records in 00:07:08.109 256+0 records out 00:07:08.109 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0902702 s, 11.6 MB/s 00:07:08.109 18:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:08.109 18:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:08.366 256+0 records in 00:07:08.366 256+0 records out 00:07:08.366 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0887517 s, 11.8 MB/s 00:07:08.366 18:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:08.366 18:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:08.366 256+0 records in 00:07:08.366 256+0 records out 00:07:08.366 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0787818 s, 13.3 MB/s 00:07:08.366 18:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:08.366 18:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:08.366 256+0 records in 00:07:08.366 256+0 records out 00:07:08.366 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0759468 s, 13.8 MB/s 00:07:08.366 18:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:08.366 18:16:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:08.625 256+0 records in 00:07:08.625 256+0 records out 00:07:08.625 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0751309 s, 14.0 MB/s 00:07:08.625 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:08.625 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:08.625 256+0 records in 00:07:08.625 256+0 records out 00:07:08.625 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0779626 s, 13.4 MB/s 00:07:08.625 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:08.625 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:08.625 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:08.625 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:08.625 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:08.625 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:08.625 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:08.625 18:16:27 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:08.625 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:08.625 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:08.625 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:08.625 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:08.625 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:08.625 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:08.625 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:08.625 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:08.625 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:08.625 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:08.625 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:08.625 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:08.625 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:08.625 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:08.625 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:08.625 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.625 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:08.626 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:08.626 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:08.626 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.626 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:08.884 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:08.884 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:08.884 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:08.884 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.884 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.884 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:08.884 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:08.884 18:16:27 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.884 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.884 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:09.143 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:09.143 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:09.143 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:09.143 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:09.143 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:09.143 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:09.143 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:09.143 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:09.143 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:09.143 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:09.143 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:09.401 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:09.401 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:09.401 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:09.401 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:09.401 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:09.401 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:09.401 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:09.401 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:09.401 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:09.401 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:09.401 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:09.401 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:09.401 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:09.401 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:09.401 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:09.401 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:09.401 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:09.401 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:09.401 18:16:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:09.659 18:16:28 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:09.659 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:09.659 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:09.659 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:09.659 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:09.659 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:09.659 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:09.659 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:09.659 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:09.659 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:09.917 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:09.917 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:09.917 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:09.917 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:09.917 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:09.917 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:09.917 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:09.917 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:09.917 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:09.917 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:10.175 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:10.175 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:10.175 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:10.175 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.175 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.175 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:10.175 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:10.175 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.175 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:10.175 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.175 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:10.175 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:10.175 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:10.175 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:10.175 18:16:28 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:10.175 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:10.175 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:10.175 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:10.175 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:10.175 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:10.175 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:10.175 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:10.175 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:10.175 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:10.175 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.175 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:10.175 18:16:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:10.433 malloc_lvol_verify 00:07:10.433 18:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:10.691 c9489d52-bb98-436a-b6e3-17a6e5efaf65 00:07:10.691 18:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:10.949 f0200ab6-e065-4986-9fc0-0651e7192b82 00:07:10.949 18:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:11.207 /dev/nbd0 00:07:11.207 18:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:11.207 18:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:11.208 18:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:11.208 18:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:11.208 18:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:11.208 mke2fs 1.47.0 (5-Feb-2023) 00:07:11.208 Discarding device blocks: 0/4096 done 00:07:11.208 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:11.208 00:07:11.208 Allocating group tables: 0/1 done 00:07:11.208 Writing inode tables: 0/1 done 00:07:11.208 Creating journal (1024 blocks): done 00:07:11.208 Writing superblocks and filesystem accounting information: 0/1 done 00:07:11.208 00:07:11.208 18:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:11.208 18:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.208 18:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:11.208 18:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:11.208 18:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:11.208 18:16:29 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.208 18:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:11.208 18:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:11.208 18:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:11.208 18:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:11.208 18:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.208 18:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.208 18:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:11.208 18:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.208 18:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.208 18:16:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61349 00:07:11.208 18:16:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61349 ']' 00:07:11.208 18:16:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61349 00:07:11.208 18:16:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:11.208 18:16:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.208 18:16:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61349 00:07:11.466 killing process with pid 61349 00:07:11.466 18:16:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:11.466 18:16:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:11.466 18:16:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61349' 00:07:11.466 18:16:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61349 00:07:11.466 18:16:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61349 00:07:12.033 18:16:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:12.033 00:07:12.033 real 0m10.047s 00:07:12.033 user 0m14.384s 00:07:12.033 sys 0m3.327s 00:07:12.033 18:16:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.033 18:16:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:12.033 ************************************ 00:07:12.033 END TEST bdev_nbd 00:07:12.033 ************************************ 00:07:12.033 18:16:30 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:12.033 18:16:30 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:12.033 18:16:30 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:12.033 skipping fio tests on NVMe due to multi-ns failures. 00:07:12.033 18:16:30 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
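The TEST bdev_nbd block ending above is, at bottom, a data-integrity pass over NBD exports of every bdev. waitfornbd polls grep -q -w <name> /proc/partitions (up to 20 tries) and then issues a single 4 KiB O_DIRECT dd read as a liveness probe; the write/verify phase traced in nbd_common.sh reduces to the sketch below (paths as in this log; the seven-device list is shortened to two for brevity):

    tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)            # the traced run drove nbd0 nbd1 nbd10..nbd14
    dd if=/dev/urandom of="$tmp" bs=4096 count=256              # 1 MiB of random data
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct   # write it through each NBD device
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev"            # byte-for-byte read-back comparison
    done
    rm "$tmp"

A mismatch makes cmp exit non-zero and fails the test; the nbd_stop_disk/waitfornbd_exit calls above then tear the devices back down.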
00:07:12.033 18:16:30 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:12.033 18:16:30 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:12.033 18:16:30 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:12.033 18:16:30 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.033 18:16:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:12.034 ************************************ 00:07:12.034 START TEST bdev_verify 00:07:12.034 ************************************ 00:07:12.034 18:16:30 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:12.034 [2024-11-20 18:16:30.547849] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:07:12.034 [2024-11-20 18:16:30.547962] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61757 ] 00:07:12.292 [2024-11-20 18:16:30.701564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:12.292 [2024-11-20 18:16:30.778223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.292 [2024-11-20 18:16:30.778321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.864 Running I/O for 5 seconds... 
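The run_test bdev_verify line above launches a single bdevperf process; unrolled, with the flags annotated (command and paths verbatim from this log, comments added):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    args=(
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json   # bdev definitions under test
        -q 128       # keep 128 I/Os outstanding per job
        -o 4096      # 4 KiB per I/O
        -w verify    # write a pattern, read it back, compare
        -t 5         # run for 5 seconds
        -C           # passed through exactly as in the log
        -m 0x3       # core mask 0x3: two reactors, cores 0 and 1, matching the notices above
    )
    "$bdevperf" "${args[@]}" ''

With two cores up, the result table below carries two jobs per bdev (Core Mask 0x1 and 0x2), each verifying one half of the LBA range.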
00:07:15.191 24256.00 IOPS, 94.75 MiB/s
[2024-11-20T18:16:34.759Z] 24544.00 IOPS, 95.88 MiB/s
[2024-11-20T18:16:35.695Z] 23914.67 IOPS, 93.42 MiB/s
[2024-11-20T18:16:36.633Z] 23664.00 IOPS, 92.44 MiB/s
[2024-11-20T18:16:36.633Z] 22937.60 IOPS, 89.60 MiB/s
00:07:18.004 Latency(us)
[2024-11-20T18:16:36.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:18.004 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:18.004 Verification LBA range: start 0x0 length 0xbd0bd
00:07:18.004 Nvme0n1 : 5.06 1670.92 6.53 0.00 0.00 76392.68 12905.55 75013.51
00:07:18.004 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:18.004 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:07:18.004 Nvme0n1 : 5.08 1561.69 6.10 0.00 0.00 81782.34 13812.97 93565.24
00:07:18.004 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:18.004 Verification LBA range: start 0x0 length 0x4ff80
00:07:18.004 Nvme1n1p1 : 5.06 1670.29 6.52 0.00 0.00 76305.27 14720.39 70980.53
00:07:18.004 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:18.004 Verification LBA range: start 0x4ff80 length 0x4ff80
00:07:18.004 Nvme1n1p1 : 5.08 1560.75 6.10 0.00 0.00 81521.83 16131.94 77836.60
00:07:18.004 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:18.004 Verification LBA range: start 0x0 length 0x4ff7f
00:07:18.004 Nvme1n1p2 : 5.06 1669.79 6.52 0.00 0.00 76237.16 16232.76 71787.13
00:07:18.004 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:18.004 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:07:18.004 Nvme1n1p2 : 5.09 1560.31 6.09 0.00 0.00 81375.23 17039.36 72593.72
00:07:18.004 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:18.004 Verification LBA range: start 0x0 length 0x80000
00:07:18.004 Nvme2n1 : 5.06 1669.34 6.52 0.00 0.00 76113.74 17644.31 72190.42
00:07:18.004 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:18.004 Verification LBA range: start 0x80000 length 0x80000
00:07:18.004 Nvme2n1 : 5.09 1559.91 6.09 0.00 0.00 81236.29 17039.36 70173.93
00:07:18.004 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:18.004 Verification LBA range: start 0x0 length 0x80000
00:07:18.004 Nvme2n2 : 5.06 1668.87 6.52 0.00 0.00 75958.12 16636.06 70980.53
00:07:18.004 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:18.004 Verification LBA range: start 0x80000 length 0x80000
00:07:18.004 Nvme2n2 : 5.09 1559.51 6.09 0.00 0.00 81083.30 16736.89 72593.72
00:07:18.005 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:18.005 Verification LBA range: start 0x0 length 0x80000
00:07:18.005 Nvme2n3 : 5.08 1675.50 6.54 0.00 0.00 75506.95 8771.74 71383.83
00:07:18.005 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:18.005 Verification LBA range: start 0x80000 length 0x80000
00:07:18.005 Nvme2n3 : 5.09 1559.08 6.09 0.00 0.00 80967.44 16434.41 71787.13
00:07:18.005 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:18.005 Verification LBA range: start 0x0 length 0x20000
00:07:18.005 Nvme3n1 : 5.09 1684.18 6.58 0.00 0.00 75091.50 9477.51 73400.32
00:07:18.005 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:18.005 Verification LBA range: start 0x20000 length 0x20000
00:07:18.005 Nvme3n1 : 5.09 1558.65 6.09 0.00 0.00 80912.72 13812.97 72997.02
[2024-11-20T18:16:36.634Z] ===================================================================================================================
00:07:18.005 [2024-11-20T18:16:36.634Z] Total : 22628.78 88.39 0.00 0.00 78517.34 8771.74 93565.24
00:07:19.391
00:07:19.392 real 0m7.145s
00:07:19.392 user 0m13.435s
00:07:19.392 sys 0m0.201s
00:07:19.392 18:16:37 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:19.392 ************************************
00:07:19.392 END TEST bdev_verify
00:07:19.392 ************************************
00:07:19.392 18:16:37 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:07:19.392 18:16:37 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:07:19.392 18:16:37 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:07:19.392 18:16:37 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:19.392 18:16:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:19.392 ************************************
00:07:19.392 START TEST bdev_verify_big_io
00:07:19.392 ************************************
00:07:19.392 18:16:37 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:07:19.392 [2024-11-20 18:16:37.763346] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization...
00:07:19.392 [2024-11-20 18:16:37.763459] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61855 ]
00:07:19.392 [2024-11-20 18:16:37.922280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:19.653 [2024-11-20 18:16:38.017922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:19.653 [2024-11-20 18:16:38.018006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:20.227 Running I/O for 5 seconds...
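The big-I/O variant just launched differs from the previous pass only in -o 65536, i.e. 64 KiB per I/O instead of 4 KiB. The two Total rows (above and below) are internally consistent, since MiB/s is just IOPS times the I/O size:

    echo "scale=2; 22628.78 * 4096 / 1048576" | bc    # 4 KiB run  -> 88.39 MiB/s, as reported
    echo "scale=2; 1698.46 * 65536 / 1048576" | bc    # 64 KiB run -> 106.15 MiB/s, as reported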
00:07:25.866 1334.00 IOPS, 83.38 MiB/s
[2024-11-20T18:16:45.065Z] 2793.50 IOPS, 174.59 MiB/s
[2024-11-20T18:16:45.065Z] 3341.33 IOPS, 208.83 MiB/s
00:07:26.436 Latency(us)
[2024-11-20T18:16:45.065Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:26.436 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:26.436 Verification LBA range: start 0x0 length 0xbd0b
00:07:26.436 Nvme0n1 : 5.89 109.78 6.86 0.00 0.00 1108895.61 10687.41 1387346.71
00:07:26.436 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:26.436 Verification LBA range: start 0xbd0b length 0xbd0b
00:07:26.436 Nvme0n1 : 5.85 111.48 6.97 0.00 0.00 1081598.23 28634.19 1245385.65
00:07:26.436 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:26.436 Verification LBA range: start 0x0 length 0x4ff8
00:07:26.436 Nvme1n1p1 : 5.89 107.72 6.73 0.00 0.00 1083581.99 102841.11 1161499.57
00:07:26.436 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:26.436 Verification LBA range: start 0x4ff8 length 0x4ff8
00:07:26.436 Nvme1n1p1 : 5.85 113.32 7.08 0.00 0.00 1040618.31 101631.21 1058255.16
00:07:26.436 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:26.436 Verification LBA range: start 0x0 length 0x4ff7
00:07:26.436 Nvme1n1p2 : 6.02 103.81 6.49 0.00 0.00 1080673.40 84692.68 1987454.82
00:07:26.436 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:26.436 Verification LBA range: start 0x4ff7 length 0x4ff7
00:07:26.436 Nvme1n1p2 : 5.93 118.25 7.39 0.00 0.00 983648.40 75013.51 987274.63
00:07:26.436 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:26.436 Verification LBA range: start 0x0 length 0x8000
00:07:26.436 Nvme2n1 : 6.10 108.36 6.77 0.00 0.00 1005261.99 113730.17 2026171.47
00:07:26.436 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:26.436 Verification LBA range: start 0x8000 length 0x8000
00:07:26.436 Nvme2n1 : 6.09 121.20 7.57 0.00 0.00 926330.66 88725.66 1032444.06
00:07:26.436 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:26.436 Verification LBA range: start 0x0 length 0x8000
00:07:26.436 Nvme2n2 : 6.15 122.40 7.65 0.00 0.00 868791.79 36901.81 1464780.01
00:07:26.436 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:26.436 Verification LBA range: start 0x8000 length 0x8000
00:07:26.436 Nvme2n2 : 6.02 122.02 7.63 0.00 0.00 898950.08 75820.11 1038896.84
00:07:26.436 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:26.436 Verification LBA range: start 0x0 length 0x8000
00:07:26.436 Nvme2n3 : 6.19 126.43 7.90 0.00 0.00 813859.07 21979.77 1819682.66
00:07:26.436 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:26.436 Verification LBA range: start 0x8000 length 0x8000
00:07:26.436 Nvme2n3 : 6.13 129.57 8.10 0.00 0.00 824937.35 36296.86 929199.66
00:07:26.436 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:26.436 Verification LBA range: start 0x0 length 0x2000
00:07:26.436 Nvme3n1 : 6.26 163.37 10.21 0.00 0.00 620496.29 297.75 1871304.86
00:07:26.436 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:26.436 Verification LBA range: start 0x2000 length 0x2000
00:07:26.436 Nvme3n1 : 6.15 140.75 8.80 0.00 0.00 738807.59 5494.94 948557.98
00:07:26.436 [2024-11-20T18:16:45.065Z] ===================================================================================================================
00:07:26.436 [2024-11-20T18:16:45.065Z] Total : 1698.46 106.15 0.00 0.00 914340.42 297.75 2026171.47
00:07:28.982
00:07:28.982 real 0m9.383s
00:07:28.982 user 0m17.864s
00:07:28.982 sys 0m0.217s
00:07:28.982 18:16:47 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:28.982 ************************************
00:07:28.982 END TEST bdev_verify_big_io
00:07:28.982 ************************************
00:07:28.982 18:16:47 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:07:28.982 18:16:47 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:28.982 18:16:47 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:07:28.982 18:16:47 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:28.982 18:16:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:28.982 ************************************
00:07:28.982 START TEST bdev_write_zeroes
00:07:28.982 ************************************
00:07:28.982 18:16:47 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:28.982 [2024-11-20 18:16:47.209537] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization...
00:07:28.982 [2024-11-20 18:16:47.209660] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61964 ]
00:07:28.982 [2024-11-20 18:16:47.369677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:28.982 [2024-11-20 18:16:47.468501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:29.551 Running I/O for 1 seconds...
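bdev_write_zeroes, starting above, swaps the workload for -w write_zeroes over a single second on one core (-c 0x1 in the EAL line, hence the lone reactor); it exercises each bdev's zero-fill path rather than data-carrying writes. The one-line progress summary below checks out at 4 KiB per operation:

    echo "scale=2; 61824.00 * 4096 / 1048576" | bc    # -> 241.50 MiB/s, matching the log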
00:07:30.489 61824.00 IOPS, 241.50 MiB/s
00:07:30.489 Latency(us)
[2024-11-20T18:16:49.118Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:30.489 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:30.489 Nvme0n1 : 1.02 8830.43 34.49 0.00 0.00 14462.42 6351.95 30650.68
00:07:30.489 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:30.489 Nvme1n1p1 : 1.02 8819.52 34.45 0.00 0.00 14459.72 10687.41 23996.26
00:07:30.489 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:30.489 Nvme1n1p2 : 1.02 8808.66 34.41 0.00 0.00 14434.92 10687.41 22786.36
00:07:30.489 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:30.489 Nvme2n1 : 1.03 8798.68 34.37 0.00 0.00 14375.76 9124.63 22181.42
00:07:30.489 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:30.489 Nvme2n2 : 1.03 8788.79 34.33 0.00 0.00 14372.87 8973.39 21778.12
00:07:30.489 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:30.489 Nvme2n3 : 1.03 8778.85 34.29 0.00 0.00 14367.16 8721.33 22786.36
00:07:30.489 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:30.489 Nvme3n1 : 1.03 8768.95 34.25 0.00 0.00 14358.66 8015.56 24298.73
[2024-11-20T18:16:49.118Z] ===================================================================================================================
00:07:30.489 [2024-11-20T18:16:49.118Z] Total : 61593.88 240.60 0.00 0.00 14404.50 6351.95 30650.68
00:07:31.431
00:07:31.431 real 0m2.668s
00:07:31.431 user 0m2.370s
00:07:31.431 sys 0m0.184s
00:07:31.431 18:16:49 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:31.431 ************************************
00:07:31.431 END TEST bdev_write_zeroes
00:07:31.431 ************************************
00:07:31.431 18:16:49 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:07:31.431 18:16:49 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:31.431 18:16:49 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:07:31.431 18:16:49 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:31.431 18:16:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:31.431 ************************************
00:07:31.431 START TEST bdev_json_nonenclosed
00:07:31.431 ************************************
00:07:31.431 18:16:49 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:31.431 [2024-11-20 18:16:49.940869] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization...
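bdev_json_nonenclosed, whose startup is traced above, is a negative test: bdevperf is pointed at nonenclosed.json and must fail cleanly with the 'not enclosed in {}' error seen below rather than crash. The fixture itself is not printed in this log; a hypothetical stand-in (any top-level JSON value that is not an object trips the same guard), together with the expected-failure shape of the check:

    # hypothetical fixture -- the real nonenclosed.json is not shown in this log
    cat > /tmp/nonenclosed.json <<'EOF'
    [ "subsystems" ]
    EOF
    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    if "$bdevperf" --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''; then
        echo "FAIL: a config not enclosed in {} must be rejected" >&2
    fi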
00:07:31.431 [2024-11-20 18:16:49.940984] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62017 ] 00:07:31.692 [2024-11-20 18:16:50.101759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.692 [2024-11-20 18:16:50.196637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.692 [2024-11-20 18:16:50.196713] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:31.692 [2024-11-20 18:16:50.196729] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:31.692 [2024-11-20 18:16:50.196738] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:31.952 00:07:31.952 real 0m0.496s 00:07:31.952 user 0m0.288s 00:07:31.952 sys 0m0.105s 00:07:31.952 18:16:50 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.952 ************************************ 00:07:31.953 END TEST bdev_json_nonenclosed 00:07:31.953 ************************************ 00:07:31.953 18:16:50 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:31.953 18:16:50 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:31.953 18:16:50 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:31.953 18:16:50 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.953 18:16:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:31.953 ************************************ 00:07:31.953 START TEST bdev_json_nonarray 00:07:31.953 ************************************ 00:07:31.953 18:16:50 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:31.953 [2024-11-20 18:16:50.499341] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:07:31.953 [2024-11-20 18:16:50.499449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62037 ] 00:07:32.211 [2024-11-20 18:16:50.659047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.211 [2024-11-20 18:16:50.752861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.211 [2024-11-20 18:16:50.752941] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
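The 'subsystems' error just logged is the second of these guards: bdev_json_nonarray feeds bdevperf a config whose top level is an object but whose 'subsystems' key is not an array. Again the real fixture is not shown; a hypothetical equivalent that would produce the same json_config_prepare_ctx error:

    # hypothetical fixture -- the real nonarray.json is not shown in this log
    cat > /tmp/nonarray.json <<'EOF'
    { "subsystems": {} }
    EOF

As with the nonenclosed case, the test passes only if bdevperf exits non-zero after the clean error (the spdk_app_stop warning below).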
00:07:32.211 [2024-11-20 18:16:50.752958] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:32.211 [2024-11-20 18:16:50.752967] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:32.471 00:07:32.471 real 0m0.495s 00:07:32.471 user 0m0.294s 00:07:32.471 sys 0m0.097s 00:07:32.471 18:16:50 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.471 ************************************ 00:07:32.471 END TEST bdev_json_nonarray 00:07:32.471 ************************************ 00:07:32.471 18:16:50 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:32.471 18:16:50 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:07:32.471 18:16:50 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:07:32.471 18:16:50 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:32.471 18:16:50 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:32.471 18:16:50 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.471 18:16:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:32.471 ************************************ 00:07:32.471 START TEST bdev_gpt_uuid 00:07:32.471 ************************************ 00:07:32.471 18:16:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:32.471 18:16:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:07:32.471 18:16:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:07:32.471 18:16:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62068 00:07:32.471 18:16:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:32.471 18:16:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62068 00:07:32.471 18:16:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 62068 ']' 00:07:32.471 18:16:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.471 18:16:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.471 18:16:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.471 18:16:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.471 18:16:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:32.471 18:16:50 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:32.471 [2024-11-20 18:16:51.066382] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
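bdev_gpt_uuid, starting above, brings up spdk_tgt, loads bdev.json, waits for examine, and then asserts that looking a GPT partition bdev up by its unique partition GUID returns exactly that partition. The rpc_cmd/jq trace that follows condenses to this sketch (uuid and paths from this log; rpc_cmd is the autotest wrapper, replaced here by a direct rpc.py call against the default /var/tmp/spdk.sock):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    uuid=6f89f330-603b-4116-ac73-2ca8eae53030   # SPDK_TEST_first; SPDK_TEST_second gets the same checks
    bdev=$("$rpc" bdev_get_bdevs -b "$uuid")
    [[ $(jq -r length <<< "$bdev") == 1 ]]      # exactly one bdev matches
    [[ $(jq -r '.[0].aliases[0]' <<< "$bdev") == "$uuid" ]]
    [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev") == "$uuid" ]]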
00:07:32.471 [2024-11-20 18:16:51.066501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62068 ]
00:07:32.732 [2024-11-20 18:16:51.224193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:32.732 [2024-11-20 18:16:51.318506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:33.302 18:16:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:33.302 18:16:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0
00:07:33.302 18:16:51 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:07:33.302 18:16:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:33.302 18:16:51 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:07:33.874 Some configs were skipped because the RPC state that can call them passed over.
00:07:33.874 18:16:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:33.874 18:16:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine
00:07:33.874 18:16:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:33.874 18:16:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:07:33.874 18:16:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:33.874 18:16:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030
00:07:33.874 18:16:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:33.874 18:16:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:07:33.874 18:16:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:33.874 18:16:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[
00:07:33.874 {
00:07:33.874 "name": "Nvme1n1p1",
00:07:33.874 "aliases": [
00:07:33.874 "6f89f330-603b-4116-ac73-2ca8eae53030"
00:07:33.874 ],
00:07:33.874 "product_name": "GPT Disk",
00:07:33.874 "block_size": 4096,
00:07:33.874 "num_blocks": 655104,
00:07:33.874 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",
00:07:33.874 "assigned_rate_limits": {
00:07:33.874 "rw_ios_per_sec": 0,
00:07:33.874 "rw_mbytes_per_sec": 0,
00:07:33.874 "r_mbytes_per_sec": 0,
00:07:33.874 "w_mbytes_per_sec": 0
00:07:33.874 },
00:07:33.874 "claimed": false,
00:07:33.874 "zoned": false,
00:07:33.874 "supported_io_types": {
00:07:33.874 "read": true,
00:07:33.874 "write": true,
00:07:33.874 "unmap": true,
00:07:33.874 "flush": true,
00:07:33.874 "reset": true,
00:07:33.874 "nvme_admin": false,
00:07:33.874 "nvme_io": false,
00:07:33.874 "nvme_io_md": false,
00:07:33.874 "write_zeroes": true,
00:07:33.874 "zcopy": false,
00:07:33.874 "get_zone_info": false,
00:07:33.874 "zone_management": false,
00:07:33.874 "zone_append": false,
00:07:33.874 "compare": true,
00:07:33.874 "compare_and_write": false,
00:07:33.874 "abort": true,
00:07:33.874 "seek_hole": false,
00:07:33.874 "seek_data": false,
00:07:33.874 "copy": true,
00:07:33.874 "nvme_iov_md": false
00:07:33.874 },
00:07:33.874 "driver_specific": {
00:07:33.874 "gpt": {
00:07:33.874 "base_bdev": "Nvme1n1",
00:07:33.874 "offset_blocks": 256,
00:07:33.874 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",
00:07:33.874 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",
00:07:33.875 "partition_name": "SPDK_TEST_first"
00:07:33.875 }
00:07:33.875 }
00:07:33.875 }
00:07:33.875 ]'
00:07:33.875 18:16:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length
00:07:33.875 18:16:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]]
00:07:33.875 18:16:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]'
00:07:33.875 18:16:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]]
00:07:33.875 18:16:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid'
00:07:33.875 18:16:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]]
00:07:33.875 18:16:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df
00:07:33.875 18:16:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:33.875 18:16:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:07:33.875 18:16:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:33.875 18:16:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[
00:07:33.875 {
00:07:33.875 "name": "Nvme1n1p2",
00:07:33.875 "aliases": [
00:07:33.875 "abf1734f-66e5-4c0f-aa29-4021d4d307df"
00:07:33.875 ],
00:07:33.875 "product_name": "GPT Disk",
00:07:33.875 "block_size": 4096,
00:07:33.875 "num_blocks": 655103,
00:07:33.875 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",
00:07:33.875 "assigned_rate_limits": {
00:07:33.875 "rw_ios_per_sec": 0,
00:07:33.875 "rw_mbytes_per_sec": 0,
00:07:33.875 "r_mbytes_per_sec": 0,
00:07:33.875 "w_mbytes_per_sec": 0
00:07:33.875 },
00:07:33.875 "claimed": false,
00:07:33.875 "zoned": false,
00:07:33.875 "supported_io_types": {
00:07:33.875 "read": true,
00:07:33.875 "write": true,
00:07:33.875 "unmap": true,
00:07:33.875 "flush": true,
00:07:33.875 "reset": true,
00:07:33.875 "nvme_admin": false,
00:07:33.875 "nvme_io": false,
00:07:33.875 "nvme_io_md": false,
00:07:33.875 "write_zeroes": true,
00:07:33.875 "zcopy": false,
00:07:33.875 "get_zone_info": false,
00:07:33.875 "zone_management": false,
00:07:33.875 "zone_append": false,
00:07:33.875 "compare": true,
00:07:33.875 "compare_and_write": false,
00:07:33.875 "abort": true,
00:07:33.875 "seek_hole": false,
00:07:33.875 "seek_data": false,
00:07:33.875 "copy": true,
00:07:33.875 "nvme_iov_md": false
00:07:33.875 },
00:07:33.875 "driver_specific": {
00:07:33.875 "gpt": {
00:07:33.875 "base_bdev": "Nvme1n1",
00:07:33.875 "offset_blocks": 655360,
00:07:33.875 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",
00:07:33.875 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",
00:07:33.875 "partition_name": "SPDK_TEST_second"
00:07:33.875 }
00:07:33.875 }
00:07:33.875 }
00:07:33.875 ]'
00:07:33.875 18:16:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length
00:07:33.875 18:16:52 blockdev_nvme_gpt.bdev_gpt_uuid
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:07:33.875 18:16:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:33.875 18:16:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:33.875 18:16:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:33.875 18:16:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:33.875 18:16:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 62068 00:07:33.875 18:16:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 62068 ']' 00:07:33.875 18:16:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 62068 00:07:33.875 18:16:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:33.875 18:16:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:33.875 18:16:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62068 00:07:33.875 18:16:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:33.875 killing process with pid 62068 00:07:33.875 18:16:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:33.875 18:16:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62068' 00:07:33.875 18:16:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 62068 00:07:33.875 18:16:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 62068 00:07:35.803 00:07:35.803 real 0m2.960s 00:07:35.803 user 0m3.079s 00:07:35.803 sys 0m0.372s 00:07:35.803 18:16:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.803 ************************************ 00:07:35.803 END TEST bdev_gpt_uuid 00:07:35.803 ************************************ 00:07:35.803 18:16:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:35.803 18:16:53 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:35.803 18:16:53 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:35.803 18:16:53 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:35.803 18:16:53 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:35.803 18:16:54 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:35.803 18:16:54 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:35.803 18:16:54 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:35.803 18:16:54 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:35.803 18:16:54 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:35.803 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:36.065 Waiting for block devices as requested 00:07:36.065 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:36.065 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:36.065 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:36.326 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:41.616 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:41.617 18:16:59 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:41.617 18:16:59 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:41.617 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:41.617 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:41.617 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:41.617 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:41.617 18:17:00 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:41.617 00:07:41.617 real 0m54.879s 00:07:41.617 user 1m10.946s 00:07:41.617 sys 0m7.327s 00:07:41.617 18:17:00 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.617 ************************************ 00:07:41.617 END TEST blockdev_nvme_gpt 00:07:41.617 ************************************ 00:07:41.617 18:17:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:41.617 18:17:00 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:41.617 18:17:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:41.617 18:17:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.617 18:17:00 -- common/autotest_common.sh@10 -- # set +x 00:07:41.617 ************************************ 00:07:41.617 START TEST nvme 00:07:41.617 ************************************ 00:07:41.617 18:17:00 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:41.617 * Looking for test storage... 00:07:41.878 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:41.878 18:17:00 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:41.878 18:17:00 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:07:41.878 18:17:00 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:41.878 18:17:00 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:41.878 18:17:00 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:41.878 18:17:00 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:41.878 18:17:00 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:41.878 18:17:00 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:41.878 18:17:00 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:41.878 18:17:00 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:41.878 18:17:00 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:41.878 18:17:00 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:41.878 18:17:00 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:41.878 18:17:00 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:41.878 18:17:00 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:41.878 18:17:00 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:41.878 18:17:00 nvme -- scripts/common.sh@345 -- # : 1 00:07:41.878 18:17:00 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:41.878 18:17:00 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:41.878 18:17:00 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:41.878 18:17:00 nvme -- scripts/common.sh@353 -- # local d=1 00:07:41.878 18:17:00 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:41.878 18:17:00 nvme -- scripts/common.sh@355 -- # echo 1 00:07:41.878 18:17:00 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:41.878 18:17:00 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:41.878 18:17:00 nvme -- scripts/common.sh@353 -- # local d=2 00:07:41.878 18:17:00 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:41.878 18:17:00 nvme -- scripts/common.sh@355 -- # echo 2 00:07:41.878 18:17:00 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:41.878 18:17:00 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:41.878 18:17:00 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:41.878 18:17:00 nvme -- scripts/common.sh@368 -- # return 0 00:07:41.878 18:17:00 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:41.878 18:17:00 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:41.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.878 --rc genhtml_branch_coverage=1 00:07:41.878 --rc genhtml_function_coverage=1 00:07:41.878 --rc genhtml_legend=1 00:07:41.878 --rc geninfo_all_blocks=1 00:07:41.878 --rc geninfo_unexecuted_blocks=1 00:07:41.878 00:07:41.878 ' 00:07:41.878 18:17:00 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:41.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.878 --rc genhtml_branch_coverage=1 00:07:41.878 --rc genhtml_function_coverage=1 00:07:41.878 --rc genhtml_legend=1 00:07:41.878 --rc geninfo_all_blocks=1 00:07:41.878 --rc geninfo_unexecuted_blocks=1 00:07:41.878 00:07:41.878 ' 00:07:41.878 18:17:00 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:41.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.878 --rc genhtml_branch_coverage=1 00:07:41.878 --rc genhtml_function_coverage=1 00:07:41.878 --rc genhtml_legend=1 00:07:41.878 --rc geninfo_all_blocks=1 00:07:41.878 --rc geninfo_unexecuted_blocks=1 00:07:41.878 00:07:41.878 ' 00:07:41.878 18:17:00 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:41.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.878 --rc genhtml_branch_coverage=1 00:07:41.878 --rc genhtml_function_coverage=1 00:07:41.878 --rc genhtml_legend=1 00:07:41.878 --rc geninfo_all_blocks=1 00:07:41.878 --rc geninfo_unexecuted_blocks=1 00:07:41.878 00:07:41.878 ' 00:07:41.878 18:17:00 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:42.138 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:42.709 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:42.709 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:42.709 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:42.969 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:42.969 18:17:01 nvme -- nvme/nvme.sh@79 -- # uname 00:07:42.969 18:17:01 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:42.969 18:17:01 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:42.969 18:17:01 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:42.969 18:17:01 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:42.969 18:17:01 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:42.969 18:17:01 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:42.969 18:17:01 nvme -- common/autotest_common.sh@1075 -- # stubpid=62703 00:07:42.969 18:17:01 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:42.969 Waiting for stub to ready for secondary processes... 00:07:42.969 18:17:01 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:42.969 18:17:01 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62703 ]] 00:07:42.969 18:17:01 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:42.969 18:17:01 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:42.969 [2024-11-20 18:17:01.432753] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:07:42.969 [2024-11-20 18:17:01.432873] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:43.910 [2024-11-20 18:17:02.195285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:43.910 [2024-11-20 18:17:02.291776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.910 [2024-11-20 18:17:02.292079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:43.910 [2024-11-20 18:17:02.292160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.910 [2024-11-20 18:17:02.306624] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:43.910 [2024-11-20 18:17:02.306676] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:43.910 [2024-11-20 18:17:02.320749] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:43.910 [2024-11-20 18:17:02.320946] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:43.910 [2024-11-20 18:17:02.325217] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:43.910 [2024-11-20 18:17:02.325673] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:43.910 [2024-11-20 18:17:02.325795] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:43.910 [2024-11-20 18:17:02.330344] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:43.910 [2024-11-20 18:17:02.330513] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:43.910 [2024-11-20 18:17:02.330562] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:43.910 [2024-11-20 18:17:02.333057] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:43.910 [2024-11-20 18:17:02.333222] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:43.910 [2024-11-20 18:17:02.333271] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:43.910 [2024-11-20 18:17:02.333304] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:43.910 [2024-11-20 18:17:02.333331] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:43.910 done. 00:07:43.910 18:17:02 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:43.910 18:17:02 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:07:43.910 18:17:02 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:43.910 18:17:02 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:43.910 18:17:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.910 18:17:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:43.910 ************************************ 00:07:43.910 START TEST nvme_reset 00:07:43.910 ************************************ 00:07:43.910 18:17:02 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:44.172 Initializing NVMe Controllers 00:07:44.172 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:44.172 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:44.172 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:44.172 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:44.172 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:44.172 00:07:44.172 real 0m0.225s 00:07:44.172 user 0m0.072s 00:07:44.172 sys 0m0.108s 00:07:44.172 ************************************ 00:07:44.172 END TEST nvme_reset 00:07:44.172 18:17:02 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.172 18:17:02 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:44.172 ************************************ 00:07:44.172 18:17:02 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:44.172 18:17:02 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:44.172 18:17:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.172 18:17:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:44.172 ************************************ 00:07:44.172 START TEST nvme_identify 00:07:44.172 ************************************ 00:07:44.172 18:17:02 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:44.172 18:17:02 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:44.172 18:17:02 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:44.172 18:17:02 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:44.172 18:17:02 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:44.172 18:17:02 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:44.172 18:17:02 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:44.172 18:17:02 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:44.172 18:17:02 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:44.172 18:17:02 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:44.172 18:17:02 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:44.172 18:17:02 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:44.172 18:17:02 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:44.437 [2024-11-20 
18:17:02.947925] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62724 terminated unexpected 00:07:44.437 ===================================================== 00:07:44.437 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:44.437 ===================================================== 00:07:44.437 Controller Capabilities/Features 00:07:44.437 ================================ 00:07:44.437 Vendor ID: 1b36 00:07:44.437 Subsystem Vendor ID: 1af4 00:07:44.437 Serial Number: 12343 00:07:44.437 Model Number: QEMU NVMe Ctrl 00:07:44.437 Firmware Version: 8.0.0 00:07:44.437 Recommended Arb Burst: 6 00:07:44.437 IEEE OUI Identifier: 00 54 52 00:07:44.437 Multi-path I/O 00:07:44.437 May have multiple subsystem ports: No 00:07:44.437 May have multiple controllers: Yes 00:07:44.437 Associated with SR-IOV VF: No 00:07:44.437 Max Data Transfer Size: 524288 00:07:44.437 Max Number of Namespaces: 256 00:07:44.437 Max Number of I/O Queues: 64 00:07:44.437 NVMe Specification Version (VS): 1.4 00:07:44.437 NVMe Specification Version (Identify): 1.4 00:07:44.437 Maximum Queue Entries: 2048 00:07:44.437 Contiguous Queues Required: Yes 00:07:44.437 Arbitration Mechanisms Supported 00:07:44.437 Weighted Round Robin: Not Supported 00:07:44.437 Vendor Specific: Not Supported 00:07:44.437 Reset Timeout: 7500 ms 00:07:44.437 Doorbell Stride: 4 bytes 00:07:44.437 NVM Subsystem Reset: Not Supported 00:07:44.437 Command Sets Supported 00:07:44.437 NVM Command Set: Supported 00:07:44.437 Boot Partition: Not Supported 00:07:44.437 Memory Page Size Minimum: 4096 bytes 00:07:44.437 Memory Page Size Maximum: 65536 bytes 00:07:44.437 Persistent Memory Region: Not Supported 00:07:44.437 Optional Asynchronous Events Supported 00:07:44.437 Namespace Attribute Notices: Supported 00:07:44.437 Firmware Activation Notices: Not Supported 00:07:44.437 ANA Change Notices: Not Supported 00:07:44.437 PLE Aggregate Log Change Notices: Not Supported 00:07:44.437 LBA Status Info Alert Notices: Not Supported 00:07:44.437 EGE Aggregate Log Change Notices: Not Supported 00:07:44.437 Normal NVM Subsystem Shutdown event: Not Supported 00:07:44.437 Zone Descriptor Change Notices: Not Supported 00:07:44.437 Discovery Log Change Notices: Not Supported 00:07:44.437 Controller Attributes 00:07:44.437 128-bit Host Identifier: Not Supported 00:07:44.437 Non-Operational Permissive Mode: Not Supported 00:07:44.437 NVM Sets: Not Supported 00:07:44.437 Read Recovery Levels: Not Supported 00:07:44.437 Endurance Groups: Supported 00:07:44.437 Predictable Latency Mode: Not Supported 00:07:44.437 Traffic Based Keep ALive: Not Supported 00:07:44.437 Namespace Granularity: Not Supported 00:07:44.437 SQ Associations: Not Supported 00:07:44.437 UUID List: Not Supported 00:07:44.437 Multi-Domain Subsystem: Not Supported 00:07:44.437 Fixed Capacity Management: Not Supported 00:07:44.437 Variable Capacity Management: Not Supported 00:07:44.437 Delete Endurance Group: Not Supported 00:07:44.437 Delete NVM Set: Not Supported 00:07:44.437 Extended LBA Formats Supported: Supported 00:07:44.437 Flexible Data Placement Supported: Supported 00:07:44.437 00:07:44.437 Controller Memory Buffer Support 00:07:44.437 ================================ 00:07:44.437 Supported: No 00:07:44.437 00:07:44.437 Persistent Memory Region Support 00:07:44.437 ================================ 00:07:44.437 Supported: No 00:07:44.437 00:07:44.437 Admin Command Set Attributes 00:07:44.437 ============================ 00:07:44.437 Security Send/Receive: Not 
Supported 00:07:44.437 Format NVM: Supported 00:07:44.437 Firmware Activate/Download: Not Supported 00:07:44.437 Namespace Management: Supported 00:07:44.438 Device Self-Test: Not Supported 00:07:44.438 Directives: Supported 00:07:44.438 NVMe-MI: Not Supported 00:07:44.438 Virtualization Management: Not Supported 00:07:44.438 Doorbell Buffer Config: Supported 00:07:44.438 Get LBA Status Capability: Not Supported 00:07:44.438 Command & Feature Lockdown Capability: Not Supported 00:07:44.438 Abort Command Limit: 4 00:07:44.438 Async Event Request Limit: 4 00:07:44.438 Number of Firmware Slots: N/A 00:07:44.438 Firmware Slot 1 Read-Only: N/A 00:07:44.438 Firmware Activation Without Reset: N/A 00:07:44.438 Multiple Update Detection Support: N/A 00:07:44.438 Firmware Update Granularity: No Information Provided 00:07:44.438 Per-Namespace SMART Log: Yes 00:07:44.438 Asymmetric Namespace Access Log Page: Not Supported 00:07:44.438 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:44.438 Command Effects Log Page: Supported 00:07:44.438 Get Log Page Extended Data: Supported 00:07:44.438 Telemetry Log Pages: Not Supported 00:07:44.438 Persistent Event Log Pages: Not Supported 00:07:44.438 Supported Log Pages Log Page: May Support 00:07:44.438 Commands Supported & Effects Log Page: Not Supported 00:07:44.438 Feature Identifiers & Effects Log Page:May Support 00:07:44.438 NVMe-MI Commands & Effects Log Page: May Support 00:07:44.438 Data Area 4 for Telemetry Log: Not Supported 00:07:44.438 Error Log Page Entries Supported: 1 00:07:44.438 Keep Alive: Not Supported 00:07:44.438 00:07:44.438 NVM Command Set Attributes 00:07:44.438 ========================== 00:07:44.438 Submission Queue Entry Size 00:07:44.438 Max: 64 00:07:44.438 Min: 64 00:07:44.438 Completion Queue Entry Size 00:07:44.438 Max: 16 00:07:44.438 Min: 16 00:07:44.438 Number of Namespaces: 256 00:07:44.438 Compare Command: Supported 00:07:44.438 Write Uncorrectable Command: Not Supported 00:07:44.438 Dataset Management Command: Supported 00:07:44.438 Write Zeroes Command: Supported 00:07:44.438 Set Features Save Field: Supported 00:07:44.438 Reservations: Not Supported 00:07:44.438 Timestamp: Supported 00:07:44.438 Copy: Supported 00:07:44.438 Volatile Write Cache: Present 00:07:44.438 Atomic Write Unit (Normal): 1 00:07:44.438 Atomic Write Unit (PFail): 1 00:07:44.438 Atomic Compare & Write Unit: 1 00:07:44.438 Fused Compare & Write: Not Supported 00:07:44.438 Scatter-Gather List 00:07:44.438 SGL Command Set: Supported 00:07:44.438 SGL Keyed: Not Supported 00:07:44.438 SGL Bit Bucket Descriptor: Not Supported 00:07:44.438 SGL Metadata Pointer: Not Supported 00:07:44.438 Oversized SGL: Not Supported 00:07:44.438 SGL Metadata Address: Not Supported 00:07:44.438 SGL Offset: Not Supported 00:07:44.438 Transport SGL Data Block: Not Supported 00:07:44.438 Replay Protected Memory Block: Not Supported 00:07:44.438 00:07:44.438 Firmware Slot Information 00:07:44.438 ========================= 00:07:44.438 Active slot: 1 00:07:44.438 Slot 1 Firmware Revision: 1.0 00:07:44.438 00:07:44.438 00:07:44.438 Commands Supported and Effects 00:07:44.438 ============================== 00:07:44.438 Admin Commands 00:07:44.438 -------------- 00:07:44.438 Delete I/O Submission Queue (00h): Supported 00:07:44.438 Create I/O Submission Queue (01h): Supported 00:07:44.438 Get Log Page (02h): Supported 00:07:44.438 Delete I/O Completion Queue (04h): Supported 00:07:44.438 Create I/O Completion Queue (05h): Supported 00:07:44.438 Identify (06h): Supported 
00:07:44.438 Abort (08h): Supported 00:07:44.438 Set Features (09h): Supported 00:07:44.438 Get Features (0Ah): Supported 00:07:44.438 Asynchronous Event Request (0Ch): Supported 00:07:44.438 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:44.438 Directive Send (19h): Supported 00:07:44.438 Directive Receive (1Ah): Supported 00:07:44.438 Virtualization Management (1Ch): Supported 00:07:44.438 Doorbell Buffer Config (7Ch): Supported 00:07:44.438 Format NVM (80h): Supported LBA-Change 00:07:44.438 I/O Commands 00:07:44.438 ------------ 00:07:44.438 Flush (00h): Supported LBA-Change 00:07:44.438 Write (01h): Supported LBA-Change 00:07:44.438 Read (02h): Supported 00:07:44.438 Compare (05h): Supported 00:07:44.438 Write Zeroes (08h): Supported LBA-Change 00:07:44.438 Dataset Management (09h): Supported LBA-Change 00:07:44.438 Unknown (0Ch): Supported 00:07:44.438 Unknown (12h): Supported 00:07:44.438 Copy (19h): Supported LBA-Change 00:07:44.438 Unknown (1Dh): Supported LBA-Change 00:07:44.438 00:07:44.438 Error Log 00:07:44.438 ========= 00:07:44.438 00:07:44.438 Arbitration 00:07:44.438 =========== 00:07:44.438 Arbitration Burst: no limit 00:07:44.438 00:07:44.438 Power Management 00:07:44.438 ================ 00:07:44.438 Number of Power States: 1 00:07:44.438 Current Power State: Power State #0 00:07:44.438 Power State #0: 00:07:44.438 Max Power: 25.00 W 00:07:44.438 Non-Operational State: Operational 00:07:44.438 Entry Latency: 16 microseconds 00:07:44.438 Exit Latency: 4 microseconds 00:07:44.438 Relative Read Throughput: 0 00:07:44.438 Relative Read Latency: 0 00:07:44.438 Relative Write Throughput: 0 00:07:44.438 Relative Write Latency: 0 00:07:44.438 Idle Power: Not Reported 00:07:44.438 Active Power: Not Reported 00:07:44.438 Non-Operational Permissive Mode: Not Supported 00:07:44.438 00:07:44.438 Health Information 00:07:44.438 ================== 00:07:44.438 Critical Warnings: 00:07:44.438 Available Spare Space: OK 00:07:44.438 Temperature: OK 00:07:44.438 Device Reliability: OK 00:07:44.438 Read Only: No 00:07:44.438 Volatile Memory Backup: OK 00:07:44.438 Current Temperature: 323 Kelvin (50 Celsius) 00:07:44.438 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:44.438 Available Spare: 0% 00:07:44.438 Available Spare Threshold: 0% 00:07:44.438 Life Percentage Used: 0% 00:07:44.438 Data Units Read: 825 00:07:44.438 Data Units Written: 754 00:07:44.438 Host Read Commands: 39939 00:07:44.438 Host Write Commands: 39362 00:07:44.438 Controller Busy Time: 0 minutes 00:07:44.438 Power Cycles: 0 00:07:44.438 Power On Hours: 0 hours 00:07:44.438 Unsafe Shutdowns: 0 00:07:44.438 Unrecoverable Media Errors: 0 00:07:44.438 Lifetime Error Log Entries: 0 00:07:44.438 Warning Temperature Time: 0 minutes 00:07:44.438 Critical Temperature Time: 0 minutes 00:07:44.438 00:07:44.438 Number of Queues 00:07:44.438 ================ 00:07:44.438 Number of I/O Submission Queues: 64 00:07:44.438 Number of I/O Completion Queues: 64 00:07:44.438 00:07:44.438 ZNS Specific Controller Data 00:07:44.438 ============================ 00:07:44.438 Zone Append Size Limit: 0 00:07:44.438 00:07:44.438 00:07:44.438 Active Namespaces 00:07:44.438 ================= 00:07:44.438 Namespace ID:1 00:07:44.438 Error Recovery Timeout: Unlimited 00:07:44.438 Command Set Identifier: NVM (00h) 00:07:44.438 Deallocate: Supported 00:07:44.438 Deallocated/Unwritten Error: Supported 00:07:44.438 Deallocated Read Value: All 0x00 00:07:44.438 Deallocate in Write Zeroes: Not Supported 00:07:44.438 Deallocated Guard 
Field: 0xFFFF 00:07:44.438 Flush: Supported 00:07:44.438 Reservation: Not Supported 00:07:44.438 Namespace Sharing Capabilities: Multiple Controllers 00:07:44.438 Size (in LBAs): 262144 (1GiB) 00:07:44.438 Capacity (in LBAs): 262144 (1GiB) 00:07:44.438 Utilization (in LBAs): 262144 (1GiB) 00:07:44.438 Thin Provisioning: Not Supported 00:07:44.438 Per-NS Atomic Units: No 00:07:44.438 Maximum Single Source Range Length: 128 00:07:44.438 Maximum Copy Length: 128 00:07:44.438 Maximum Source Range Count: 128 00:07:44.438 NGUID/EUI64 Never Reused: No 00:07:44.438 Namespace Write Protected: No 00:07:44.438 Endurance group ID: 1 00:07:44.438 Number of LBA Formats: 8 00:07:44.438 Current LBA Format: LBA Format #04 00:07:44.438 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:44.438 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:44.439 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:44.439 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:44.439 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:44.439 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:44.439 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:44.439 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:44.439 00:07:44.439 Get Feature FDP: 00:07:44.439 ================ 00:07:44.439 Enabled: Yes 00:07:44.439 FDP configuration index: 0 00:07:44.439 00:07:44.439 FDP configurations log page 00:07:44.439 =========================== 00:07:44.439 Number of FDP configurations: 1 00:07:44.439 Version: 0 00:07:44.439 Size: 112 00:07:44.439 FDP Configuration Descriptor: 0 00:07:44.439 Descriptor Size: 96 00:07:44.439 Reclaim Group Identifier format: 2 00:07:44.439 FDP Volatile Write Cache: Not Present 00:07:44.439 FDP Configuration: Valid 00:07:44.439 Vendor Specific Size: 0 00:07:44.439 Number of Reclaim Groups: 2 00:07:44.439 Number of Reclaim Unit Handles: 8 00:07:44.439 Max Placement Identifiers: 128 00:07:44.439 Number of Namespaces Supported: 256 00:07:44.439 Reclaim unit Nominal Size: 6000000 bytes 00:07:44.439 Estimated Reclaim Unit Time Limit: Not Reported 00:07:44.439 RUH Desc #000: RUH Type: Initially Isolated 00:07:44.439 RUH Desc #001: RUH Type: Initially Isolated 00:07:44.439 RUH Desc #002: RUH Type: Initially Isolated 00:07:44.439 RUH Desc #003: RUH Type: Initially Isolated 00:07:44.439 RUH Desc #004: RUH Type: Initially Isolated 00:07:44.439 RUH Desc #005: RUH Type: Initially Isolated 00:07:44.439 RUH Desc #006: RUH Type: Initially Isolated 00:07:44.439 RUH Desc #007: RUH Type: Initially Isolated 00:07:44.439 00:07:44.439 FDP reclaim unit handle usage log page 00:07:44.439 ====================================== 00:07:44.439 [2024-11-20 18:17:02.951293] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62724 terminated unexpected 00:07:44.439 Number of Reclaim Unit Handles: 8 00:07:44.439 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:44.439 RUH Usage Desc #001: RUH Attributes: Unused 00:07:44.439 RUH Usage Desc #002: RUH Attributes: Unused 00:07:44.439 RUH Usage Desc #003: RUH Attributes: Unused 00:07:44.439 RUH Usage Desc #004: RUH Attributes: Unused 00:07:44.439 RUH Usage Desc #005: RUH Attributes: Unused 00:07:44.439 RUH Usage Desc #006: RUH Attributes: Unused 00:07:44.439 RUH Usage Desc #007: RUH Attributes: Unused 00:07:44.439 00:07:44.439 FDP statistics log page 00:07:44.439 ======================= 00:07:44.439 Host bytes with metadata written: 486514688 00:07:44.439 Media bytes with metadata written: 486559744 00:07:44.439 Media 
bytes erased: 0 00:07:44.439 00:07:44.439 FDP events log page 00:07:44.439 =================== 00:07:44.439 Number of FDP events: 0 00:07:44.439 00:07:44.439 NVM Specific Namespace Data 00:07:44.439 =========================== 00:07:44.439 Logical Block Storage Tag Mask: 0 00:07:44.439 Protection Information Capabilities: 00:07:44.439 16b Guard Protection Information Storage Tag Support: No 00:07:44.439 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:44.439 Storage Tag Check Read Support: No 00:07:44.439 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.439 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.439 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.439 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.439 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.439 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.439 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.439 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.439 ===================================================== 00:07:44.439 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:44.439 ===================================================== 00:07:44.439 Controller Capabilities/Features 00:07:44.439 ================================ 00:07:44.439 Vendor ID: 1b36 00:07:44.439 Subsystem Vendor ID: 1af4 00:07:44.439 Serial Number: 12340 00:07:44.439 Model Number: QEMU NVMe Ctrl 00:07:44.439 Firmware Version: 8.0.0 00:07:44.439 Recommended Arb Burst: 6 00:07:44.439 IEEE OUI Identifier: 00 54 52 00:07:44.439 Multi-path I/O 00:07:44.439 May have multiple subsystem ports: No 00:07:44.439 May have multiple controllers: No 00:07:44.439 Associated with SR-IOV VF: No 00:07:44.439 Max Data Transfer Size: 524288 00:07:44.439 Max Number of Namespaces: 256 00:07:44.439 Max Number of I/O Queues: 64 00:07:44.439 NVMe Specification Version (VS): 1.4 00:07:44.439 NVMe Specification Version (Identify): 1.4 00:07:44.439 Maximum Queue Entries: 2048 00:07:44.439 Contiguous Queues Required: Yes 00:07:44.439 Arbitration Mechanisms Supported 00:07:44.439 Weighted Round Robin: Not Supported 00:07:44.439 Vendor Specific: Not Supported 00:07:44.439 Reset Timeout: 7500 ms 00:07:44.439 Doorbell Stride: 4 bytes 00:07:44.439 NVM Subsystem Reset: Not Supported 00:07:44.439 Command Sets Supported 00:07:44.439 NVM Command Set: Supported 00:07:44.439 Boot Partition: Not Supported 00:07:44.439 Memory Page Size Minimum: 4096 bytes 00:07:44.439 Memory Page Size Maximum: 65536 bytes 00:07:44.439 Persistent Memory Region: Not Supported 00:07:44.439 Optional Asynchronous Events Supported 00:07:44.439 Namespace Attribute Notices: Supported 00:07:44.439 Firmware Activation Notices: Not Supported 00:07:44.439 ANA Change Notices: Not Supported 00:07:44.439 PLE Aggregate Log Change Notices: Not Supported 00:07:44.439 LBA Status Info Alert Notices: Not Supported 00:07:44.439 EGE Aggregate Log Change Notices: Not Supported 00:07:44.439 Normal NVM Subsystem Shutdown event: Not Supported 00:07:44.439 Zone Descriptor Change Notices: Not Supported 00:07:44.439 Discovery Log Change Notices: Not Supported 00:07:44.439 Controller Attributes 00:07:44.439 
128-bit Host Identifier: Not Supported 00:07:44.439 Non-Operational Permissive Mode: Not Supported 00:07:44.439 NVM Sets: Not Supported 00:07:44.439 Read Recovery Levels: Not Supported 00:07:44.439 Endurance Groups: Not Supported 00:07:44.439 Predictable Latency Mode: Not Supported 00:07:44.439 Traffic Based Keep ALive: Not Supported 00:07:44.439 Namespace Granularity: Not Supported 00:07:44.439 SQ Associations: Not Supported 00:07:44.439 UUID List: Not Supported 00:07:44.439 Multi-Domain Subsystem: Not Supported 00:07:44.439 Fixed Capacity Management: Not Supported 00:07:44.439 Variable Capacity Management: Not Supported 00:07:44.440 Delete Endurance Group: Not Supported 00:07:44.440 Delete NVM Set: Not Supported 00:07:44.440 Extended LBA Formats Supported: Supported 00:07:44.440 Flexible Data Placement Supported: Not Supported 00:07:44.440 00:07:44.440 Controller Memory Buffer Support 00:07:44.440 ================================ 00:07:44.440 Supported: No 00:07:44.440 00:07:44.440 Persistent Memory Region Support 00:07:44.440 ================================ 00:07:44.440 Supported: No 00:07:44.440 00:07:44.440 Admin Command Set Attributes 00:07:44.440 ============================ 00:07:44.440 Security Send/Receive: Not Supported 00:07:44.440 Format NVM: Supported 00:07:44.440 Firmware Activate/Download: Not Supported 00:07:44.440 Namespace Management: Supported 00:07:44.440 Device Self-Test: Not Supported 00:07:44.440 Directives: Supported 00:07:44.440 NVMe-MI: Not Supported 00:07:44.440 Virtualization Management: Not Supported 00:07:44.440 Doorbell Buffer Config: Supported 00:07:44.440 Get LBA Status Capability: Not Supported 00:07:44.440 Command & Feature Lockdown Capability: Not Supported 00:07:44.440 Abort Command Limit: 4 00:07:44.440 Async Event Request Limit: 4 00:07:44.440 Number of Firmware Slots: N/A 00:07:44.440 Firmware Slot 1 Read-Only: N/A 00:07:44.440 Firmware Activation Without Reset: N/A 00:07:44.440 Multiple Update Detection Support: N/A 00:07:44.440 Firmware Update Granularity: No Information Provided 00:07:44.440 Per-Namespace SMART Log: Yes 00:07:44.440 Asymmetric Namespace Access Log Page: Not Supported 00:07:44.440 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:44.440 Command Effects Log Page: Supported 00:07:44.440 Get Log Page Extended Data: Supported 00:07:44.440 Telemetry Log Pages: Not Supported 00:07:44.440 Persistent Event Log Pages: Not Supported 00:07:44.440 Supported Log Pages Log Page: May Support 00:07:44.440 Commands Supported & Effects Log Page: Not Supported 00:07:44.440 Feature Identifiers & Effects Log Page:May Support 00:07:44.440 NVMe-MI Commands & Effects Log Page: May Support 00:07:44.440 Data Area 4 for Telemetry Log: Not Supported 00:07:44.440 Error Log Page Entries Supported: 1 00:07:44.440 Keep Alive: Not Supported 00:07:44.440 00:07:44.440 NVM Command Set Attributes 00:07:44.440 ========================== 00:07:44.440 Submission Queue Entry Size 00:07:44.440 Max: 64 00:07:44.440 Min: 64 00:07:44.440 Completion Queue Entry Size 00:07:44.440 Max: 16 00:07:44.440 Min: 16 00:07:44.440 Number of Namespaces: 256 00:07:44.440 Compare Command: Supported 00:07:44.440 Write Uncorrectable Command: Not Supported 00:07:44.440 Dataset Management Command: Supported 00:07:44.440 Write Zeroes Command: Supported 00:07:44.440 Set Features Save Field: Supported 00:07:44.440 Reservations: Not Supported 00:07:44.440 Timestamp: Supported 00:07:44.440 Copy: Supported 00:07:44.440 Volatile Write Cache: Present 00:07:44.440 Atomic Write Unit (Normal): 1 
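The controllers dumped here, 0000:00:10.0 through 0000:00:13.0, were enumerated by the get_nvme_bdfs helper traced just before the dump started: gen_nvme.sh emits a JSON config and jq pulls out each traddr; the (( 4 == 0 )) in the trace is its emptiness guard. A condensed, standalone sketch of that enumeration, assuming it is run from the SPDK repo root:

  #!/usr/bin/env bash
  # Collect the PCI addresses (BDFs) of all NVMe controllers gen_nvme.sh can see.
  bdfs=($(scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
  printf '%s\n' "${bdfs[@]}"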
00:07:44.440 Atomic Write Unit (PFail): 1 00:07:44.440 Atomic Compare & Write Unit: 1 00:07:44.440 Fused Compare & Write: Not Supported 00:07:44.440 Scatter-Gather List 00:07:44.440 SGL Command Set: Supported 00:07:44.440 SGL Keyed: Not Supported 00:07:44.440 SGL Bit Bucket Descriptor: Not Supported 00:07:44.440 SGL Metadata Pointer: Not Supported 00:07:44.440 Oversized SGL: Not Supported 00:07:44.440 SGL Metadata Address: Not Supported 00:07:44.440 SGL Offset: Not Supported 00:07:44.440 Transport SGL Data Block: Not Supported 00:07:44.440 Replay Protected Memory Block: Not Supported 00:07:44.440 00:07:44.440 Firmware Slot Information 00:07:44.440 ========================= 00:07:44.440 Active slot: 1 00:07:44.440 Slot 1 Firmware Revision: 1.0 00:07:44.440 00:07:44.440 00:07:44.440 Commands Supported and Effects 00:07:44.440 ============================== 00:07:44.440 Admin Commands 00:07:44.440 -------------- 00:07:44.440 Delete I/O Submission Queue (00h): Supported 00:07:44.440 Create I/O Submission Queue (01h): Supported 00:07:44.440 Get Log Page (02h): Supported 00:07:44.440 Delete I/O Completion Queue (04h): Supported 00:07:44.440 Create I/O Completion Queue (05h): Supported 00:07:44.440 Identify (06h): Supported 00:07:44.440 Abort (08h): Supported 00:07:44.440 Set Features (09h): Supported 00:07:44.440 Get Features (0Ah): Supported 00:07:44.440 Asynchronous Event Request (0Ch): Supported 00:07:44.440 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:44.440 Directive Send (19h): Supported 00:07:44.440 Directive Receive (1Ah): Supported 00:07:44.440 Virtualization Management (1Ch): Supported 00:07:44.440 Doorbell Buffer Config (7Ch): Supported 00:07:44.440 Format NVM (80h): Supported LBA-Change 00:07:44.440 I/O Commands 00:07:44.440 ------------ 00:07:44.440 Flush (00h): Supported LBA-Change 00:07:44.440 Write (01h): Supported LBA-Change 00:07:44.440 Read (02h): Supported 00:07:44.440 Compare (05h): Supported 00:07:44.440 Write Zeroes (08h): Supported LBA-Change 00:07:44.440 Dataset Management (09h): Supported LBA-Change 00:07:44.440 Unknown (0Ch): Supported 00:07:44.440 Unknown (12h): Supported 00:07:44.440 Copy (19h): Supported LBA-Change 00:07:44.440 Unknown (1Dh): Supported LBA-Change 00:07:44.440 00:07:44.440 Error Log 00:07:44.440 ========= 00:07:44.440 00:07:44.440 Arbitration 00:07:44.440 =========== 00:07:44.440 Arbitration Burst: no limit 00:07:44.440 00:07:44.440 Power Management 00:07:44.440 ================ 00:07:44.440 Number of Power States: 1 00:07:44.440 Current Power State: Power State #0 00:07:44.440 Power State #0: 00:07:44.440 Max Power: 25.00 W 00:07:44.440 Non-Operational State: Operational 00:07:44.440 Entry Latency: 16 microseconds 00:07:44.440 Exit Latency: 4 microseconds 00:07:44.440 Relative Read Throughput: 0 00:07:44.440 Relative Read Latency: 0 00:07:44.440 Relative Write Throughput: 0 00:07:44.440 Relative Write Latency: 0 00:07:44.440 Idle Power: Not Reported 00:07:44.440 Active Power: Not Reported 00:07:44.440 Non-Operational Permissive Mode: Not Supported 00:07:44.440 00:07:44.440 Health Information 00:07:44.440 ================== 00:07:44.440 Critical Warnings: 00:07:44.440 Available Spare Space: OK 00:07:44.440 Temperature: OK 00:07:44.440 Device Reliability: OK 00:07:44.440 Read Only: No 00:07:44.440 Volatile Memory Backup: OK 00:07:44.440 Current Temperature: 323 Kelvin (50 Celsius) 00:07:44.440 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:44.440 Available Spare: 0% 00:07:44.440 Available Spare Threshold: 0% 00:07:44.440 Life 
Percentage Used: 0% 00:07:44.440 Data Units Read: 734 00:07:44.440 Data Units Written: 662 00:07:44.440 Host Read Commands: 38754 00:07:44.440 Host Write Commands: 38540 00:07:44.441 Controller Busy Time: 0 minutes 00:07:44.441 Power Cycles: 0 00:07:44.441 Power On Hours: 0 hours 00:07:44.441 Unsafe Shutdowns: 0 00:07:44.441 Unrecoverable Media Errors: 0 00:07:44.441 Lifetime Error Log Entries: 0 00:07:44.441 Warning Temperature Time: 0 minutes 00:07:44.441 Critical Temperature Time: 0 minutes 00:07:44.441 00:07:44.441 Number of Queues 00:07:44.441 ================ 00:07:44.441 Number of I/O Submission Queues: 64 00:07:44.441 Number of I/O Completion Queues: 64 00:07:44.441 00:07:44.441 ZNS Specific Controller Data 00:07:44.441 ============================ 00:07:44.441 Zone Append Size Limit: 0 00:07:44.441 00:07:44.441 00:07:44.441 Active Namespaces 00:07:44.441 ================= 00:07:44.441 Namespace ID:1 00:07:44.441 Error Recovery Timeout: Unlimited 00:07:44.441 Command Set Identifier: NVM (00h) 00:07:44.441 Deallocate: Supported 00:07:44.441 Deallocated/Unwritten Error: Supported 00:07:44.441 Deallocated Read Value: All 0x00 00:07:44.441 Deallocate in Write Zeroes: Not Supported 00:07:44.441 Deallocated Guard Field: 0xFFFF 00:07:44.441 Flush: Supported 00:07:44.441 Reservation: Not Supported 00:07:44.441 Metadata Transferred as: Separate Metadata Buffer 00:07:44.441 Namespace Sharing Capabilities: Private 00:07:44.441 Size (in LBAs): 1548666 (5GiB) 00:07:44.441 Capacity (in LBAs): 1548666 (5GiB) 00:07:44.441 Utilization (in LBAs): 1548666 (5GiB) 00:07:44.441 Thin Provisioning: Not Supported 00:07:44.441 Per-NS Atomic Units: No 00:07:44.441 Maximum Single Source Range Length: 128 00:07:44.441 Maximum Copy Length: 128 00:07:44.441 Maximum Source Range Count: 128 00:07:44.441 NGUID/EUI64 Never Reused: No 00:07:44.441 Namespace Write Protected: No 00:07:44.441 Number of LBA Formats: 8 00:07:44.441 Current LBA Format: LBA Format #07 00:07:44.441 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:44.441 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:44.441 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:44.441 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:44.441 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:44.441 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:44.441 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:44.441 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:44.441 00:07:44.441 NVM Specific Namespace Data 00:07:44.441 =========================== 00:07:44.441 Logical Block Storage Tag Mask: 0 00:07:44.441 Protection Information Capabilities: 00:07:44.441 16b Guard Protection Information Storage Tag Support: No 00:07:44.441 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:44.441 Storage Tag Check Read Support: No 00:07:44.441 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.441 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.441 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.441 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.441 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.441 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.441 Extended LBA Format #06: Storage Tag Size: 0 , 
Protection Information Format: 16b Guard PI 00:07:44.441 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.441 ===================================================== 00:07:44.441 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:44.441 ===================================================== 00:07:44.441 Controller Capabilities/Features 00:07:44.441 ================================ 00:07:44.441 Vendor ID: 1b36 00:07:44.441 Subsystem Vendor ID: 1af4 00:07:44.441 Serial Number: 12341 00:07:44.441 Model Number: QEMU NVMe Ctrl 00:07:44.441 Firmware Version: 8.0.0 00:07:44.441 Recommended Arb Burst: 6 00:07:44.441 IEEE OUI Identifier: 00 54 52 00:07:44.441 Multi-path I/O 00:07:44.441 May have multiple subsystem ports: No 00:07:44.441 May have multiple controllers: No 00:07:44.441 Associated with SR-IOV VF: No 00:07:44.441 Max Data Transfer Size: 524288 00:07:44.441 Max Number of Namespaces: 256 00:07:44.441 Max Number of I/O Queues: 64 00:07:44.441 NVMe Specification Version (VS): 1.4 00:07:44.441 NVMe Specification Version (Identify): 1.4 00:07:44.441 Maximum Queue Entries: 2048 00:07:44.441 Contiguous Queues Required: Yes 00:07:44.441 Arbitration Mechanisms Supported 00:07:44.441 Weighted Round Robin: Not Supported 00:07:44.441 Vendor Specific: Not Supported 00:07:44.441 Reset Timeout: 7500 ms 00:07:44.441 Doorbell Stride: 4 bytes 00:07:44.441 NVM Subsystem Reset: Not Supported 00:07:44.441 Command Sets Supported 00:07:44.441 NVM Command Set: Supported 00:07:44.441 Boot Partition: Not Supported 00:07:44.441 Memory Page Size Minimum: 4096 bytes 00:07:44.441 Memory Page Size Maximum: 65536 bytes 00:07:44.441 Persistent Memory Region: Not Supported 00:07:44.441 Optional Asynchronous Events Supported 00:07:44.441 Namespace Attribute Notices: Supported 00:07:44.441 Firmware Activation Notices: Not Supported 00:07:44.441 ANA Change Notices: Not Supported 00:07:44.441 PLE Aggregate Log Change Notices: Not Supported 00:07:44.441 LBA Status Info Alert Notices: Not Supported 00:07:44.441 EGE Aggregate Log Change Notices: Not Supported 00:07:44.441 Normal NVM Subsystem Shutdown event: Not Supported 00:07:44.441 Zone Descriptor Change Notices: Not Supported 00:07:44.441 Discovery Log Change Notices: Not Supported 00:07:44.441 Controller Attributes 00:07:44.441 128-bit Host Identifier: Not Supported 00:07:44.441 Non-Operational Permissive Mode: Not Supported 00:07:44.441 NVM Sets: Not Supported 00:07:44.441 Read Recovery Levels: Not Supported 00:07:44.441 Endurance Groups: Not Supported 00:07:44.441 Predictable Latency Mode: Not Supported 00:07:44.441 Traffic Based Keep ALive: Not Supported 00:07:44.441 Namespace Granularity: Not Supported 00:07:44.441 SQ Associations: Not Supported 00:07:44.441 [2024-11-20 18:17:02.952464] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62724 terminated unexpected 00:07:44.441 UUID List: Not Supported 00:07:44.441 Multi-Domain Subsystem: Not Supported 00:07:44.441 Fixed Capacity Management: Not Supported 00:07:44.441 Variable Capacity Management: Not Supported 00:07:44.441 Delete Endurance Group: Not Supported 00:07:44.441 Delete NVM Set: Not Supported 00:07:44.441 Extended LBA Formats Supported: Supported 00:07:44.441 Flexible Data Placement Supported: Not Supported 00:07:44.441 00:07:44.441 Controller Memory Buffer Support 00:07:44.441 ================================ 00:07:44.441 Supported: No 00:07:44.441 00:07:44.441 Persistent Memory Region Support 00:07:44.441 
================================ 00:07:44.441 Supported: No 00:07:44.441 00:07:44.441 Admin Command Set Attributes 00:07:44.441 ============================ 00:07:44.441 Security Send/Receive: Not Supported 00:07:44.441 Format NVM: Supported 00:07:44.441 Firmware Activate/Download: Not Supported 00:07:44.441 Namespace Management: Supported 00:07:44.441 Device Self-Test: Not Supported 00:07:44.441 Directives: Supported 00:07:44.441 NVMe-MI: Not Supported 00:07:44.441 Virtualization Management: Not Supported 00:07:44.441 Doorbell Buffer Config: Supported 00:07:44.441 Get LBA Status Capability: Not Supported 00:07:44.441 Command & Feature Lockdown Capability: Not Supported 00:07:44.441 Abort Command Limit: 4 00:07:44.441 Async Event Request Limit: 4 00:07:44.441 Number of Firmware Slots: N/A 00:07:44.441 Firmware Slot 1 Read-Only: N/A 00:07:44.441 Firmware Activation Without Reset: N/A 00:07:44.441 Multiple Update Detection Support: N/A 00:07:44.442 Firmware Update Granularity: No Information Provided 00:07:44.442 Per-Namespace SMART Log: Yes 00:07:44.442 Asymmetric Namespace Access Log Page: Not Supported 00:07:44.442 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:44.442 Command Effects Log Page: Supported 00:07:44.442 Get Log Page Extended Data: Supported 00:07:44.442 Telemetry Log Pages: Not Supported 00:07:44.442 Persistent Event Log Pages: Not Supported 00:07:44.442 Supported Log Pages Log Page: May Support 00:07:44.442 Commands Supported & Effects Log Page: Not Supported 00:07:44.442 Feature Identifiers & Effects Log Page:May Support 00:07:44.442 NVMe-MI Commands & Effects Log Page: May Support 00:07:44.442 Data Area 4 for Telemetry Log: Not Supported 00:07:44.442 Error Log Page Entries Supported: 1 00:07:44.442 Keep Alive: Not Supported 00:07:44.442 00:07:44.442 NVM Command Set Attributes 00:07:44.442 ========================== 00:07:44.442 Submission Queue Entry Size 00:07:44.442 Max: 64 00:07:44.442 Min: 64 00:07:44.442 Completion Queue Entry Size 00:07:44.442 Max: 16 00:07:44.442 Min: 16 00:07:44.442 Number of Namespaces: 256 00:07:44.442 Compare Command: Supported 00:07:44.442 Write Uncorrectable Command: Not Supported 00:07:44.442 Dataset Management Command: Supported 00:07:44.442 Write Zeroes Command: Supported 00:07:44.442 Set Features Save Field: Supported 00:07:44.442 Reservations: Not Supported 00:07:44.442 Timestamp: Supported 00:07:44.442 Copy: Supported 00:07:44.442 Volatile Write Cache: Present 00:07:44.442 Atomic Write Unit (Normal): 1 00:07:44.442 Atomic Write Unit (PFail): 1 00:07:44.442 Atomic Compare & Write Unit: 1 00:07:44.442 Fused Compare & Write: Not Supported 00:07:44.442 Scatter-Gather List 00:07:44.442 SGL Command Set: Supported 00:07:44.442 SGL Keyed: Not Supported 00:07:44.442 SGL Bit Bucket Descriptor: Not Supported 00:07:44.442 SGL Metadata Pointer: Not Supported 00:07:44.442 Oversized SGL: Not Supported 00:07:44.442 SGL Metadata Address: Not Supported 00:07:44.442 SGL Offset: Not Supported 00:07:44.442 Transport SGL Data Block: Not Supported 00:07:44.442 Replay Protected Memory Block: Not Supported 00:07:44.442 00:07:44.442 Firmware Slot Information 00:07:44.442 ========================= 00:07:44.442 Active slot: 1 00:07:44.442 Slot 1 Firmware Revision: 1.0 00:07:44.442 00:07:44.442 00:07:44.442 Commands Supported and Effects 00:07:44.442 ============================== 00:07:44.442 Admin Commands 00:07:44.442 -------------- 00:07:44.442 Delete I/O Submission Queue (00h): Supported 00:07:44.442 Create I/O Submission Queue (01h): Supported 00:07:44.442 
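The stub primary process launched earlier in this run (stub -s 4096 -i 0 -m 0xE, then polling /var/run/spdk_stub0) is SPDK's multi-process warm-up: one long-lived primary initializes 4096 MB of hugepage memory on cores 1-3 so that each short-lived test binary can attach as a DPDK secondary instead of paying EAL startup every time. A simplified sketch of that startup, reconstructed from the _start_stub trace; the real helper runs under the harness with trap handlers and root privileges:

  repo=/home/vagrant/spdk_repo/spdk
  "$repo/test/app/stub/stub" -s 4096 -i 0 -m 0xE &
  stubpid=$!
  echo 'Waiting for stub to ready for secondary processes...'
  # Poll until the stub publishes its ready file, bailing out if it died first.
  while [[ ! -e /var/run/spdk_stub0 ]]; do
      [[ -e /proc/$stubpid ]] || { echo 'stub exited early' >&2; exit 1; }
      sleep 1s
  done
  echo done.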
Get Log Page (02h): Supported 00:07:44.442 Delete I/O Completion Queue (04h): Supported 00:07:44.442 Create I/O Completion Queue (05h): Supported 00:07:44.442 Identify (06h): Supported 00:07:44.442 Abort (08h): Supported 00:07:44.442 Set Features (09h): Supported 00:07:44.442 Get Features (0Ah): Supported 00:07:44.442 Asynchronous Event Request (0Ch): Supported 00:07:44.442 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:44.442 Directive Send (19h): Supported 00:07:44.442 Directive Receive (1Ah): Supported 00:07:44.442 Virtualization Management (1Ch): Supported 00:07:44.442 Doorbell Buffer Config (7Ch): Supported 00:07:44.442 Format NVM (80h): Supported LBA-Change 00:07:44.442 I/O Commands 00:07:44.442 ------------ 00:07:44.442 Flush (00h): Supported LBA-Change 00:07:44.442 Write (01h): Supported LBA-Change 00:07:44.442 Read (02h): Supported 00:07:44.442 Compare (05h): Supported 00:07:44.442 Write Zeroes (08h): Supported LBA-Change 00:07:44.442 Dataset Management (09h): Supported LBA-Change 00:07:44.442 Unknown (0Ch): Supported 00:07:44.442 Unknown (12h): Supported 00:07:44.442 Copy (19h): Supported LBA-Change 00:07:44.442 Unknown (1Dh): Supported LBA-Change 00:07:44.442 00:07:44.442 Error Log 00:07:44.442 ========= 00:07:44.442 00:07:44.442 Arbitration 00:07:44.442 =========== 00:07:44.442 Arbitration Burst: no limit 00:07:44.442 00:07:44.442 Power Management 00:07:44.442 ================ 00:07:44.442 Number of Power States: 1 00:07:44.442 Current Power State: Power State #0 00:07:44.442 Power State #0: 00:07:44.442 Max Power: 25.00 W 00:07:44.442 Non-Operational State: Operational 00:07:44.442 Entry Latency: 16 microseconds 00:07:44.442 Exit Latency: 4 microseconds 00:07:44.442 Relative Read Throughput: 0 00:07:44.442 Relative Read Latency: 0 00:07:44.442 Relative Write Throughput: 0 00:07:44.442 Relative Write Latency: 0 00:07:44.442 Idle Power: Not Reported 00:07:44.442 Active Power: Not Reported 00:07:44.442 Non-Operational Permissive Mode: Not Supported 00:07:44.442 00:07:44.442 Health Information 00:07:44.442 ================== 00:07:44.442 Critical Warnings: 00:07:44.442 Available Spare Space: OK 00:07:44.442 Temperature: OK 00:07:44.442 Device Reliability: OK 00:07:44.442 Read Only: No 00:07:44.442 Volatile Memory Backup: OK 00:07:44.442 Current Temperature: 323 Kelvin (50 Celsius) 00:07:44.442 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:44.442 Available Spare: 0% 00:07:44.442 Available Spare Threshold: 0% 00:07:44.442 Life Percentage Used: 0% 00:07:44.442 Data Units Read: 1055 00:07:44.442 Data Units Written: 923 00:07:44.442 Host Read Commands: 57202 00:07:44.442 Host Write Commands: 55999 00:07:44.442 Controller Busy Time: 0 minutes 00:07:44.442 Power Cycles: 0 00:07:44.442 Power On Hours: 0 hours 00:07:44.442 Unsafe Shutdowns: 0 00:07:44.442 Unrecoverable Media Errors: 0 00:07:44.442 Lifetime Error Log Entries: 0 00:07:44.442 Warning Temperature Time: 0 minutes 00:07:44.442 Critical Temperature Time: 0 minutes 00:07:44.442 00:07:44.442 Number of Queues 00:07:44.442 ================ 00:07:44.442 Number of I/O Submission Queues: 64 00:07:44.442 Number of I/O Completion Queues: 64 00:07:44.442 00:07:44.442 ZNS Specific Controller Data 00:07:44.442 ============================ 00:07:44.442 Zone Append Size Limit: 0 00:07:44.442 00:07:44.442 00:07:44.442 Active Namespaces 00:07:44.442 ================= 00:07:44.442 Namespace ID:1 00:07:44.442 Error Recovery Timeout: Unlimited 00:07:44.442 Command Set Identifier: NVM (00h) 00:07:44.442 Deallocate: Supported 
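Before this identify pass, cleanup wiped the GPT from /dev/nvme0n1: wipefs erased the 8-byte 'EFI PART' signature (45 46 49 20 50 41 52 54) from the primary header at offset 0x1000 and from the backup header near the end of the disk, plus the 55 aa protective-MBR signature at offset 0x1fe, then asked the kernel to re-read the table. A sketch of the same wipe as a standalone wrapper (the wrapper itself is hypothetical, not from the harness, and it destroys the partition table):

  dev=${1:?usage: wipe-gpt <block-device>}   # e.g. /dev/nvme0n1
  wipefs --all "$dev"          # erases GPT primary+backup signatures and the PMBR
  blockdev --rereadpt "$dev"   # redundant after wipefs's own ioctl, but harmless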
00:07:44.442 Deallocated/Unwritten Error: Supported 00:07:44.442 Deallocated Read Value: All 0x00 00:07:44.442 Deallocate in Write Zeroes: Not Supported 00:07:44.442 Deallocated Guard Field: 0xFFFF 00:07:44.442 Flush: Supported 00:07:44.442 Reservation: Not Supported 00:07:44.442 Namespace Sharing Capabilities: Private 00:07:44.443 Size (in LBAs): 1310720 (5GiB) 00:07:44.443 Capacity (in LBAs): 1310720 (5GiB) 00:07:44.443 Utilization (in LBAs): 1310720 (5GiB) 00:07:44.443 Thin Provisioning: Not Supported 00:07:44.443 Per-NS Atomic Units: No 00:07:44.443 Maximum Single Source Range Length: 128 00:07:44.443 Maximum Copy Length: 128 00:07:44.443 Maximum Source Range Count: 128 00:07:44.443 NGUID/EUI64 Never Reused: No 00:07:44.443 Namespace Write Protected: No 00:07:44.443 Number of LBA Formats: 8 00:07:44.443 Current LBA Format: LBA Format #04 00:07:44.443 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:44.443 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:44.443 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:44.443 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:44.443 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:44.443 [2024-11-20 18:17:02.953004] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62724 terminated unexpected 00:07:44.443 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:44.443 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:44.443 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:44.443 00:07:44.443 NVM Specific Namespace Data 00:07:44.443 =========================== 00:07:44.443 Logical Block Storage Tag Mask: 0 00:07:44.443 Protection Information Capabilities: 00:07:44.443 16b Guard Protection Information Storage Tag Support: No 00:07:44.443 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:44.443 Storage Tag Check Read Support: No 00:07:44.443 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.443 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.443 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.443 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.443 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.443 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.443 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.443 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.443 ===================================================== 00:07:44.443 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:44.443 ===================================================== 00:07:44.443 Controller Capabilities/Features 00:07:44.443 ================================ 00:07:44.443 Vendor ID: 1b36 00:07:44.443 Subsystem Vendor ID: 1af4 00:07:44.443 Serial Number: 12342 00:07:44.443 Model Number: QEMU NVMe Ctrl 00:07:44.443 Firmware Version: 8.0.0 00:07:44.443 Recommended Arb Burst: 6 00:07:44.443 IEEE OUI Identifier: 00 54 52 00:07:44.443 Multi-path I/O 00:07:44.443 May have multiple subsystem ports: No 00:07:44.443 May have multiple controllers: No 00:07:44.443 Associated with SR-IOV VF: No 00:07:44.443 Max Data Transfer Size: 524288 00:07:44.443 Max Number of Namespaces: 256 00:07:44.443
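Looping back to the bdev_gpt_uuid assertions that opened this section: they extract the first alias and the GPT unique_partition_guid from bdev_get_bdevs JSON and require both to equal the expected partition GUID; the backslash-escaped right-hand side in the trace is only bash quoting, forcing [[ ]] to compare literally rather than as a glob. A minimal sketch of the same check, assuming a running SPDK app, scripts/rpc.py on the default socket, and the GPT partition bdev name in PART (a hypothetical placeholder):

  expected=abf1734f-66e5-4c0f-aa29-4021d4d307df
  json=$(scripts/rpc.py bdev_get_bdevs -b "$PART")
  alias=$(jq -r '.[0].aliases[0]' <<<"$json")
  guid=$(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$json")
  [[ $alias == "$expected" && $guid == "$expected" ]] || exit 1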
Max Number of I/O Queues: 64 00:07:44.443 NVMe Specification Version (VS): 1.4 00:07:44.443 NVMe Specification Version (Identify): 1.4 00:07:44.443 Maximum Queue Entries: 2048 00:07:44.443 Contiguous Queues Required: Yes 00:07:44.443 Arbitration Mechanisms Supported 00:07:44.443 Weighted Round Robin: Not Supported 00:07:44.443 Vendor Specific: Not Supported 00:07:44.443 Reset Timeout: 7500 ms 00:07:44.443 Doorbell Stride: 4 bytes 00:07:44.443 NVM Subsystem Reset: Not Supported 00:07:44.443 Command Sets Supported 00:07:44.443 NVM Command Set: Supported 00:07:44.443 Boot Partition: Not Supported 00:07:44.443 Memory Page Size Minimum: 4096 bytes 00:07:44.443 Memory Page Size Maximum: 65536 bytes 00:07:44.443 Persistent Memory Region: Not Supported 00:07:44.443 Optional Asynchronous Events Supported 00:07:44.443 Namespace Attribute Notices: Supported 00:07:44.443 Firmware Activation Notices: Not Supported 00:07:44.443 ANA Change Notices: Not Supported 00:07:44.443 PLE Aggregate Log Change Notices: Not Supported 00:07:44.443 LBA Status Info Alert Notices: Not Supported 00:07:44.443 EGE Aggregate Log Change Notices: Not Supported 00:07:44.443 Normal NVM Subsystem Shutdown event: Not Supported 00:07:44.443 Zone Descriptor Change Notices: Not Supported 00:07:44.443 Discovery Log Change Notices: Not Supported 00:07:44.443 Controller Attributes 00:07:44.443 128-bit Host Identifier: Not Supported 00:07:44.443 Non-Operational Permissive Mode: Not Supported 00:07:44.443 NVM Sets: Not Supported 00:07:44.443 Read Recovery Levels: Not Supported 00:07:44.443 Endurance Groups: Not Supported 00:07:44.443 Predictable Latency Mode: Not Supported 00:07:44.443 Traffic Based Keep ALive: Not Supported 00:07:44.443 Namespace Granularity: Not Supported 00:07:44.443 SQ Associations: Not Supported 00:07:44.443 UUID List: Not Supported 00:07:44.443 Multi-Domain Subsystem: Not Supported 00:07:44.443 Fixed Capacity Management: Not Supported 00:07:44.443 Variable Capacity Management: Not Supported 00:07:44.443 Delete Endurance Group: Not Supported 00:07:44.443 Delete NVM Set: Not Supported 00:07:44.443 Extended LBA Formats Supported: Supported 00:07:44.443 Flexible Data Placement Supported: Not Supported 00:07:44.443 00:07:44.443 Controller Memory Buffer Support 00:07:44.443 ================================ 00:07:44.443 Supported: No 00:07:44.443 00:07:44.443 Persistent Memory Region Support 00:07:44.443 ================================ 00:07:44.443 Supported: No 00:07:44.443 00:07:44.443 Admin Command Set Attributes 00:07:44.443 ============================ 00:07:44.443 Security Send/Receive: Not Supported 00:07:44.443 Format NVM: Supported 00:07:44.443 Firmware Activate/Download: Not Supported 00:07:44.443 Namespace Management: Supported 00:07:44.443 Device Self-Test: Not Supported 00:07:44.443 Directives: Supported 00:07:44.443 NVMe-MI: Not Supported 00:07:44.443 Virtualization Management: Not Supported 00:07:44.443 Doorbell Buffer Config: Supported 00:07:44.443 Get LBA Status Capability: Not Supported 00:07:44.443 Command & Feature Lockdown Capability: Not Supported 00:07:44.443 Abort Command Limit: 4 00:07:44.444 Async Event Request Limit: 4 00:07:44.444 Number of Firmware Slots: N/A 00:07:44.444 Firmware Slot 1 Read-Only: N/A 00:07:44.444 Firmware Activation Without Reset: N/A 00:07:44.444 Multiple Update Detection Support: N/A 00:07:44.444 Firmware Update Granularity: No Information Provided 00:07:44.444 Per-Namespace SMART Log: Yes 00:07:44.444 Asymmetric Namespace Access Log Page: Not Supported 00:07:44.444 
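The killprocess call that tore down the bdev_gpt_uuid app at the start of this section follows the guard pattern visible in its trace: require a PID, probe it with kill -0, check the command name (reactor_0 here) so a recycled PID cannot take down an unrelated process, then kill and reap. A condensed sketch of that pattern; the body is reconstructed from the traced checks rather than copied from autotest_common.sh, and the sudo special case is simplified to a bail-out:

  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1                  # the '[ -z ... ]' guard in the trace
      kill -0 "$pid" 2>/dev/null || return 0     # process already gone
      local name
      name=$(ps --no-headers -o comm= "$pid")    # reactor_0 for an SPDK app
      [[ $name == sudo ]] && return 1            # the real helper treats sudo specially
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"
  }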
Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:44.444 Command Effects Log Page: Supported 00:07:44.444 Get Log Page Extended Data: Supported 00:07:44.444 Telemetry Log Pages: Not Supported 00:07:44.444 Persistent Event Log Pages: Not Supported 00:07:44.444 Supported Log Pages Log Page: May Support 00:07:44.444 Commands Supported & Effects Log Page: Not Supported 00:07:44.444 Feature Identifiers & Effects Log Page:May Support 00:07:44.444 NVMe-MI Commands & Effects Log Page: May Support 00:07:44.444 Data Area 4 for Telemetry Log: Not Supported 00:07:44.444 Error Log Page Entries Supported: 1 00:07:44.444 Keep Alive: Not Supported 00:07:44.444 00:07:44.444 NVM Command Set Attributes 00:07:44.444 ========================== 00:07:44.444 Submission Queue Entry Size 00:07:44.444 Max: 64 00:07:44.444 Min: 64 00:07:44.444 Completion Queue Entry Size 00:07:44.444 Max: 16 00:07:44.444 Min: 16 00:07:44.444 Number of Namespaces: 256 00:07:44.444 Compare Command: Supported 00:07:44.444 Write Uncorrectable Command: Not Supported 00:07:44.444 Dataset Management Command: Supported 00:07:44.444 Write Zeroes Command: Supported 00:07:44.444 Set Features Save Field: Supported 00:07:44.444 Reservations: Not Supported 00:07:44.444 Timestamp: Supported 00:07:44.444 Copy: Supported 00:07:44.444 Volatile Write Cache: Present 00:07:44.444 Atomic Write Unit (Normal): 1 00:07:44.444 Atomic Write Unit (PFail): 1 00:07:44.444 Atomic Compare & Write Unit: 1 00:07:44.444 Fused Compare & Write: Not Supported 00:07:44.444 Scatter-Gather List 00:07:44.444 SGL Command Set: Supported 00:07:44.444 SGL Keyed: Not Supported 00:07:44.444 SGL Bit Bucket Descriptor: Not Supported 00:07:44.444 SGL Metadata Pointer: Not Supported 00:07:44.444 Oversized SGL: Not Supported 00:07:44.444 SGL Metadata Address: Not Supported 00:07:44.444 SGL Offset: Not Supported 00:07:44.444 Transport SGL Data Block: Not Supported 00:07:44.444 Replay Protected Memory Block: Not Supported 00:07:44.444 00:07:44.444 Firmware Slot Information 00:07:44.444 ========================= 00:07:44.444 Active slot: 1 00:07:44.444 Slot 1 Firmware Revision: 1.0 00:07:44.444 00:07:44.444 00:07:44.444 Commands Supported and Effects 00:07:44.444 ============================== 00:07:44.444 Admin Commands 00:07:44.444 -------------- 00:07:44.444 Delete I/O Submission Queue (00h): Supported 00:07:44.444 Create I/O Submission Queue (01h): Supported 00:07:44.444 Get Log Page (02h): Supported 00:07:44.444 Delete I/O Completion Queue (04h): Supported 00:07:44.444 Create I/O Completion Queue (05h): Supported 00:07:44.444 Identify (06h): Supported 00:07:44.444 Abort (08h): Supported 00:07:44.444 Set Features (09h): Supported 00:07:44.444 Get Features (0Ah): Supported 00:07:44.444 Asynchronous Event Request (0Ch): Supported 00:07:44.444 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:44.444 Directive Send (19h): Supported 00:07:44.444 Directive Receive (1Ah): Supported 00:07:44.444 Virtualization Management (1Ch): Supported 00:07:44.444 Doorbell Buffer Config (7Ch): Supported 00:07:44.444 Format NVM (80h): Supported LBA-Change 00:07:44.444 I/O Commands 00:07:44.444 ------------ 00:07:44.444 Flush (00h): Supported LBA-Change 00:07:44.444 Write (01h): Supported LBA-Change 00:07:44.444 Read (02h): Supported 00:07:44.444 Compare (05h): Supported 00:07:44.444 Write Zeroes (08h): Supported LBA-Change 00:07:44.444 Dataset Management (09h): Supported LBA-Change 00:07:44.444 Unknown (0Ch): Supported 00:07:44.444 Unknown (12h): Supported 00:07:44.444 Copy (19h): Supported 
LBA-Change 00:07:44.444 Unknown (1Dh): Supported LBA-Change 00:07:44.444 00:07:44.444 Error Log 00:07:44.444 ========= 00:07:44.444 00:07:44.444 Arbitration 00:07:44.444 =========== 00:07:44.444 Arbitration Burst: no limit 00:07:44.444 00:07:44.444 Power Management 00:07:44.444 ================ 00:07:44.444 Number of Power States: 1 00:07:44.444 Current Power State: Power State #0 00:07:44.444 Power State #0: 00:07:44.444 Max Power: 25.00 W 00:07:44.444 Non-Operational State: Operational 00:07:44.444 Entry Latency: 16 microseconds 00:07:44.444 Exit Latency: 4 microseconds 00:07:44.444 Relative Read Throughput: 0 00:07:44.444 Relative Read Latency: 0 00:07:44.444 Relative Write Throughput: 0 00:07:44.444 Relative Write Latency: 0 00:07:44.444 Idle Power: Not Reported 00:07:44.444 Active Power: Not Reported 00:07:44.444 Non-Operational Permissive Mode: Not Supported 00:07:44.444 00:07:44.444 Health Information 00:07:44.444 ================== 00:07:44.444 Critical Warnings: 00:07:44.444 Available Spare Space: OK 00:07:44.444 Temperature: OK 00:07:44.444 Device Reliability: OK 00:07:44.444 Read Only: No 00:07:44.444 Volatile Memory Backup: OK 00:07:44.444 Current Temperature: 323 Kelvin (50 Celsius) 00:07:44.444 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:44.444 Available Spare: 0% 00:07:44.444 Available Spare Threshold: 0% 00:07:44.444 Life Percentage Used: 0% 00:07:44.444 Data Units Read: 2217 00:07:44.444 Data Units Written: 2005 00:07:44.444 Host Read Commands: 117464 00:07:44.444 Host Write Commands: 115733 00:07:44.444 Controller Busy Time: 0 minutes 00:07:44.444 Power Cycles: 0 00:07:44.444 Power On Hours: 0 hours 00:07:44.444 Unsafe Shutdowns: 0 00:07:44.444 Unrecoverable Media Errors: 0 00:07:44.444 Lifetime Error Log Entries: 0 00:07:44.444 Warning Temperature Time: 0 minutes 00:07:44.444 Critical Temperature Time: 0 minutes 00:07:44.444 00:07:44.444 Number of Queues 00:07:44.444 ================ 00:07:44.444 Number of I/O Submission Queues: 64 00:07:44.444 Number of I/O Completion Queues: 64 00:07:44.444 00:07:44.444 ZNS Specific Controller Data 00:07:44.444 ============================ 00:07:44.444 Zone Append Size Limit: 0 00:07:44.444 00:07:44.444 00:07:44.444 Active Namespaces 00:07:44.444 ================= 00:07:44.444 Namespace ID:1 00:07:44.444 Error Recovery Timeout: Unlimited 00:07:44.444 Command Set Identifier: NVM (00h) 00:07:44.444 Deallocate: Supported 00:07:44.444 Deallocated/Unwritten Error: Supported 00:07:44.444 Deallocated Read Value: All 0x00 00:07:44.445 Deallocate in Write Zeroes: Not Supported 00:07:44.445 Deallocated Guard Field: 0xFFFF 00:07:44.445 Flush: Supported 00:07:44.445 Reservation: Not Supported 00:07:44.445 Namespace Sharing Capabilities: Private 00:07:44.445 Size (in LBAs): 1048576 (4GiB) 00:07:44.445 Capacity (in LBAs): 1048576 (4GiB) 00:07:44.445 Utilization (in LBAs): 1048576 (4GiB) 00:07:44.445 Thin Provisioning: Not Supported 00:07:44.445 Per-NS Atomic Units: No 00:07:44.445 Maximum Single Source Range Length: 128 00:07:44.445 Maximum Copy Length: 128 00:07:44.445 Maximum Source Range Count: 128 00:07:44.445 NGUID/EUI64 Never Reused: No 00:07:44.445 Namespace Write Protected: No 00:07:44.445 Number of LBA Formats: 8 00:07:44.445 Current LBA Format: LBA Format #04 00:07:44.445 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:44.445 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:44.445 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:44.445 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:44.445 LBA Format #04: 
Data Size: 4096 Metadata Size: 0 00:07:44.445 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:44.445 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:44.445 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:44.445 00:07:44.445 NVM Specific Namespace Data 00:07:44.445 =========================== 00:07:44.445 Logical Block Storage Tag Mask: 0 00:07:44.445 Protection Information Capabilities: 00:07:44.445 16b Guard Protection Information Storage Tag Support: No 00:07:44.445 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:44.445 Storage Tag Check Read Support: No 00:07:44.445 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.445 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.445 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.445 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.445 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.445 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.445 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.445 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.445 Namespace ID:2 00:07:44.445 Error Recovery Timeout: Unlimited 00:07:44.445 Command Set Identifier: NVM (00h) 00:07:44.445 Deallocate: Supported 00:07:44.445 Deallocated/Unwritten Error: Supported 00:07:44.445 Deallocated Read Value: All 0x00 00:07:44.445 Deallocate in Write Zeroes: Not Supported 00:07:44.445 Deallocated Guard Field: 0xFFFF 00:07:44.445 Flush: Supported 00:07:44.445 Reservation: Not Supported 00:07:44.445 Namespace Sharing Capabilities: Private 00:07:44.445 Size (in LBAs): 1048576 (4GiB) 00:07:44.445 Capacity (in LBAs): 1048576 (4GiB) 00:07:44.445 Utilization (in LBAs): 1048576 (4GiB) 00:07:44.445 Thin Provisioning: Not Supported 00:07:44.445 Per-NS Atomic Units: No 00:07:44.445 Maximum Single Source Range Length: 128 00:07:44.445 Maximum Copy Length: 128 00:07:44.445 Maximum Source Range Count: 128 00:07:44.445 NGUID/EUI64 Never Reused: No 00:07:44.445 Namespace Write Protected: No 00:07:44.445 Number of LBA Formats: 8 00:07:44.445 Current LBA Format: LBA Format #04 00:07:44.445 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:44.445 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:44.445 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:44.445 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:44.445 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:44.445 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:44.445 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:44.445 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:44.445 00:07:44.445 NVM Specific Namespace Data 00:07:44.445 =========================== 00:07:44.445 Logical Block Storage Tag Mask: 0 00:07:44.445 Protection Information Capabilities: 00:07:44.445 16b Guard Protection Information Storage Tag Support: No 00:07:44.445 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:44.445 Storage Tag Check Read Support: No 00:07:44.445 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.445 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 
16b Guard PI 00:07:44.445 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.445 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.445 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.445 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.445 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.445 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.445 Namespace ID:3 00:07:44.445 Error Recovery Timeout: Unlimited 00:07:44.445 Command Set Identifier: NVM (00h) 00:07:44.445 Deallocate: Supported 00:07:44.445 Deallocated/Unwritten Error: Supported 00:07:44.445 Deallocated Read Value: All 0x00 00:07:44.445 Deallocate in Write Zeroes: Not Supported 00:07:44.445 Deallocated Guard Field: 0xFFFF 00:07:44.445 Flush: Supported 00:07:44.445 Reservation: Not Supported 00:07:44.445 Namespace Sharing Capabilities: Private 00:07:44.445 Size (in LBAs): 1048576 (4GiB) 00:07:44.445 Capacity (in LBAs): 1048576 (4GiB) 00:07:44.445 Utilization (in LBAs): 1048576 (4GiB) 00:07:44.445 Thin Provisioning: Not Supported 00:07:44.445 Per-NS Atomic Units: No 00:07:44.445 Maximum Single Source Range Length: 128 00:07:44.445 Maximum Copy Length: 128 00:07:44.445 Maximum Source Range Count: 128 00:07:44.445 NGUID/EUI64 Never Reused: No 00:07:44.445 Namespace Write Protected: No 00:07:44.445 Number of LBA Formats: 8 00:07:44.445 Current LBA Format: LBA Format #04 00:07:44.445 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:44.445 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:44.445 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:44.445 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:44.445 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:44.445 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:44.445 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:44.445 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:44.445 00:07:44.445 NVM Specific Namespace Data 00:07:44.445 =========================== 00:07:44.445 Logical Block Storage Tag Mask: 0 00:07:44.445 Protection Information Capabilities: 00:07:44.445 16b Guard Protection Information Storage Tag Support: No 00:07:44.445 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:44.445 Storage Tag Check Read Support: No 00:07:44.445 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.445 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.445 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.445 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.445 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.445 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.445 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.445 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.445 18:17:02 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:44.445 18:17:02 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:44.707 ===================================================== 00:07:44.707 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:44.707 ===================================================== 00:07:44.707 Controller Capabilities/Features 00:07:44.707 ================================ 00:07:44.707 Vendor ID: 1b36 00:07:44.707 Subsystem Vendor ID: 1af4 00:07:44.707 Serial Number: 12340 00:07:44.707 Model Number: QEMU NVMe Ctrl 00:07:44.707 Firmware Version: 8.0.0 00:07:44.707 Recommended Arb Burst: 6 00:07:44.707 IEEE OUI Identifier: 00 54 52 00:07:44.707 Multi-path I/O 00:07:44.707 May have multiple subsystem ports: No 00:07:44.707 May have multiple controllers: No 00:07:44.707 Associated with SR-IOV VF: No 00:07:44.707 Max Data Transfer Size: 524288 00:07:44.707 Max Number of Namespaces: 256 00:07:44.707 Max Number of I/O Queues: 64 00:07:44.707 NVMe Specification Version (VS): 1.4 00:07:44.707 NVMe Specification Version (Identify): 1.4 00:07:44.707 Maximum Queue Entries: 2048 00:07:44.707 Contiguous Queues Required: Yes 00:07:44.707 Arbitration Mechanisms Supported 00:07:44.707 Weighted Round Robin: Not Supported 00:07:44.707 Vendor Specific: Not Supported 00:07:44.707 Reset Timeout: 7500 ms 00:07:44.707 Doorbell Stride: 4 bytes 00:07:44.707 NVM Subsystem Reset: Not Supported 00:07:44.707 Command Sets Supported 00:07:44.707 NVM Command Set: Supported 00:07:44.707 Boot Partition: Not Supported 00:07:44.707 Memory Page Size Minimum: 4096 bytes 00:07:44.707 Memory Page Size Maximum: 65536 bytes 00:07:44.707 Persistent Memory Region: Not Supported 00:07:44.707 Optional Asynchronous Events Supported 00:07:44.707 Namespace Attribute Notices: Supported 00:07:44.707 Firmware Activation Notices: Not Supported 00:07:44.707 ANA Change Notices: Not Supported 00:07:44.707 PLE Aggregate Log Change Notices: Not Supported 00:07:44.707 LBA Status Info Alert Notices: Not Supported 00:07:44.707 EGE Aggregate Log Change Notices: Not Supported 00:07:44.707 Normal NVM Subsystem Shutdown event: Not Supported 00:07:44.707 Zone Descriptor Change Notices: Not Supported 00:07:44.707 Discovery Log Change Notices: Not Supported 00:07:44.707 Controller Attributes 00:07:44.707 128-bit Host Identifier: Not Supported 00:07:44.707 Non-Operational Permissive Mode: Not Supported 00:07:44.707 NVM Sets: Not Supported 00:07:44.707 Read Recovery Levels: Not Supported 00:07:44.707 Endurance Groups: Not Supported 00:07:44.707 Predictable Latency Mode: Not Supported 00:07:44.707 Traffic Based Keep ALive: Not Supported 00:07:44.707 Namespace Granularity: Not Supported 00:07:44.707 SQ Associations: Not Supported 00:07:44.707 UUID List: Not Supported 00:07:44.707 Multi-Domain Subsystem: Not Supported 00:07:44.707 Fixed Capacity Management: Not Supported 00:07:44.707 Variable Capacity Management: Not Supported 00:07:44.707 Delete Endurance Group: Not Supported 00:07:44.707 Delete NVM Set: Not Supported 00:07:44.707 Extended LBA Formats Supported: Supported 00:07:44.707 Flexible Data Placement Supported: Not Supported 00:07:44.707 00:07:44.707 Controller Memory Buffer Support 00:07:44.707 ================================ 00:07:44.707 Supported: No 00:07:44.707 00:07:44.707 Persistent Memory Region Support 00:07:44.707 ================================ 00:07:44.707 Supported: No 00:07:44.707 00:07:44.707 Admin Command Set Attributes 00:07:44.707 ============================ 00:07:44.707 Security Send/Receive: Not Supported 00:07:44.707 
Format NVM: Supported 00:07:44.707 Firmware Activate/Download: Not Supported 00:07:44.707 Namespace Management: Supported 00:07:44.707 Device Self-Test: Not Supported 00:07:44.707 Directives: Supported 00:07:44.707 NVMe-MI: Not Supported 00:07:44.707 Virtualization Management: Not Supported 00:07:44.707 Doorbell Buffer Config: Supported 00:07:44.707 Get LBA Status Capability: Not Supported 00:07:44.707 Command & Feature Lockdown Capability: Not Supported 00:07:44.707 Abort Command Limit: 4 00:07:44.707 Async Event Request Limit: 4 00:07:44.707 Number of Firmware Slots: N/A 00:07:44.707 Firmware Slot 1 Read-Only: N/A 00:07:44.707 Firmware Activation Without Reset: N/A 00:07:44.707 Multiple Update Detection Support: N/A 00:07:44.707 Firmware Update Granularity: No Information Provided 00:07:44.707 Per-Namespace SMART Log: Yes 00:07:44.707 Asymmetric Namespace Access Log Page: Not Supported 00:07:44.707 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:44.707 Command Effects Log Page: Supported 00:07:44.707 Get Log Page Extended Data: Supported 00:07:44.707 Telemetry Log Pages: Not Supported 00:07:44.707 Persistent Event Log Pages: Not Supported 00:07:44.707 Supported Log Pages Log Page: May Support 00:07:44.707 Commands Supported & Effects Log Page: Not Supported 00:07:44.707 Feature Identifiers & Effects Log Page:May Support 00:07:44.707 NVMe-MI Commands & Effects Log Page: May Support 00:07:44.707 Data Area 4 for Telemetry Log: Not Supported 00:07:44.707 Error Log Page Entries Supported: 1 00:07:44.707 Keep Alive: Not Supported 00:07:44.707 00:07:44.707 NVM Command Set Attributes 00:07:44.707 ========================== 00:07:44.707 Submission Queue Entry Size 00:07:44.707 Max: 64 00:07:44.707 Min: 64 00:07:44.707 Completion Queue Entry Size 00:07:44.707 Max: 16 00:07:44.707 Min: 16 00:07:44.707 Number of Namespaces: 256 00:07:44.707 Compare Command: Supported 00:07:44.707 Write Uncorrectable Command: Not Supported 00:07:44.707 Dataset Management Command: Supported 00:07:44.707 Write Zeroes Command: Supported 00:07:44.708 Set Features Save Field: Supported 00:07:44.708 Reservations: Not Supported 00:07:44.708 Timestamp: Supported 00:07:44.708 Copy: Supported 00:07:44.708 Volatile Write Cache: Present 00:07:44.708 Atomic Write Unit (Normal): 1 00:07:44.708 Atomic Write Unit (PFail): 1 00:07:44.708 Atomic Compare & Write Unit: 1 00:07:44.708 Fused Compare & Write: Not Supported 00:07:44.708 Scatter-Gather List 00:07:44.708 SGL Command Set: Supported 00:07:44.708 SGL Keyed: Not Supported 00:07:44.708 SGL Bit Bucket Descriptor: Not Supported 00:07:44.708 SGL Metadata Pointer: Not Supported 00:07:44.708 Oversized SGL: Not Supported 00:07:44.708 SGL Metadata Address: Not Supported 00:07:44.708 SGL Offset: Not Supported 00:07:44.708 Transport SGL Data Block: Not Supported 00:07:44.708 Replay Protected Memory Block: Not Supported 00:07:44.708 00:07:44.708 Firmware Slot Information 00:07:44.708 ========================= 00:07:44.708 Active slot: 1 00:07:44.708 Slot 1 Firmware Revision: 1.0 00:07:44.708 00:07:44.708 00:07:44.708 Commands Supported and Effects 00:07:44.708 ============================== 00:07:44.708 Admin Commands 00:07:44.708 -------------- 00:07:44.708 Delete I/O Submission Queue (00h): Supported 00:07:44.708 Create I/O Submission Queue (01h): Supported 00:07:44.708 Get Log Page (02h): Supported 00:07:44.708 Delete I/O Completion Queue (04h): Supported 00:07:44.708 Create I/O Completion Queue (05h): Supported 00:07:44.708 Identify (06h): Supported 00:07:44.708 Abort (08h): Supported 
00:07:44.708 Set Features (09h): Supported 00:07:44.708 Get Features (0Ah): Supported 00:07:44.708 Asynchronous Event Request (0Ch): Supported 00:07:44.708 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:44.708 Directive Send (19h): Supported 00:07:44.708 Directive Receive (1Ah): Supported 00:07:44.708 Virtualization Management (1Ch): Supported 00:07:44.708 Doorbell Buffer Config (7Ch): Supported 00:07:44.708 Format NVM (80h): Supported LBA-Change 00:07:44.708 I/O Commands 00:07:44.708 ------------ 00:07:44.708 Flush (00h): Supported LBA-Change 00:07:44.708 Write (01h): Supported LBA-Change 00:07:44.708 Read (02h): Supported 00:07:44.708 Compare (05h): Supported 00:07:44.708 Write Zeroes (08h): Supported LBA-Change 00:07:44.708 Dataset Management (09h): Supported LBA-Change 00:07:44.708 Unknown (0Ch): Supported 00:07:44.708 Unknown (12h): Supported 00:07:44.708 Copy (19h): Supported LBA-Change 00:07:44.708 Unknown (1Dh): Supported LBA-Change 00:07:44.708 00:07:44.708 Error Log 00:07:44.708 ========= 00:07:44.708 00:07:44.708 Arbitration 00:07:44.708 =========== 00:07:44.708 Arbitration Burst: no limit 00:07:44.708 00:07:44.708 Power Management 00:07:44.708 ================ 00:07:44.708 Number of Power States: 1 00:07:44.708 Current Power State: Power State #0 00:07:44.708 Power State #0: 00:07:44.708 Max Power: 25.00 W 00:07:44.708 Non-Operational State: Operational 00:07:44.708 Entry Latency: 16 microseconds 00:07:44.708 Exit Latency: 4 microseconds 00:07:44.708 Relative Read Throughput: 0 00:07:44.708 Relative Read Latency: 0 00:07:44.708 Relative Write Throughput: 0 00:07:44.708 Relative Write Latency: 0 00:07:44.708 Idle Power: Not Reported 00:07:44.708 Active Power: Not Reported 00:07:44.708 Non-Operational Permissive Mode: Not Supported 00:07:44.708 00:07:44.708 Health Information 00:07:44.708 ================== 00:07:44.708 Critical Warnings: 00:07:44.708 Available Spare Space: OK 00:07:44.708 Temperature: OK 00:07:44.708 Device Reliability: OK 00:07:44.708 Read Only: No 00:07:44.708 Volatile Memory Backup: OK 00:07:44.708 Current Temperature: 323 Kelvin (50 Celsius) 00:07:44.708 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:44.708 Available Spare: 0% 00:07:44.708 Available Spare Threshold: 0% 00:07:44.708 Life Percentage Used: 0% 00:07:44.708 Data Units Read: 734 00:07:44.708 Data Units Written: 662 00:07:44.708 Host Read Commands: 38754 00:07:44.708 Host Write Commands: 38540 00:07:44.708 Controller Busy Time: 0 minutes 00:07:44.708 Power Cycles: 0 00:07:44.708 Power On Hours: 0 hours 00:07:44.708 Unsafe Shutdowns: 0 00:07:44.708 Unrecoverable Media Errors: 0 00:07:44.708 Lifetime Error Log Entries: 0 00:07:44.708 Warning Temperature Time: 0 minutes 00:07:44.708 Critical Temperature Time: 0 minutes 00:07:44.708 00:07:44.708 Number of Queues 00:07:44.708 ================ 00:07:44.708 Number of I/O Submission Queues: 64 00:07:44.708 Number of I/O Completion Queues: 64 00:07:44.708 00:07:44.708 ZNS Specific Controller Data 00:07:44.708 ============================ 00:07:44.708 Zone Append Size Limit: 0 00:07:44.708 00:07:44.708 00:07:44.708 Active Namespaces 00:07:44.708 ================= 00:07:44.708 Namespace ID:1 00:07:44.708 Error Recovery Timeout: Unlimited 00:07:44.708 Command Set Identifier: NVM (00h) 00:07:44.708 Deallocate: Supported 00:07:44.708 Deallocated/Unwritten Error: Supported 00:07:44.708 Deallocated Read Value: All 0x00 00:07:44.708 Deallocate in Write Zeroes: Not Supported 00:07:44.708 Deallocated Guard Field: 0xFFFF 00:07:44.708 Flush: 
Supported 00:07:44.708 Reservation: Not Supported 00:07:44.708 Metadata Transferred as: Separate Metadata Buffer 00:07:44.708 Namespace Sharing Capabilities: Private 00:07:44.708 Size (in LBAs): 1548666 (5GiB) 00:07:44.708 Capacity (in LBAs): 1548666 (5GiB) 00:07:44.708 Utilization (in LBAs): 1548666 (5GiB) 00:07:44.708 Thin Provisioning: Not Supported 00:07:44.708 Per-NS Atomic Units: No 00:07:44.708 Maximum Single Source Range Length: 128 00:07:44.708 Maximum Copy Length: 128 00:07:44.708 Maximum Source Range Count: 128 00:07:44.708 NGUID/EUI64 Never Reused: No 00:07:44.708 Namespace Write Protected: No 00:07:44.708 Number of LBA Formats: 8 00:07:44.708 Current LBA Format: LBA Format #07 00:07:44.708 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:44.708 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:44.708 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:44.708 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:44.708 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:44.708 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:44.708 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:44.708 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:44.709 00:07:44.709 NVM Specific Namespace Data 00:07:44.709 =========================== 00:07:44.709 Logical Block Storage Tag Mask: 0 00:07:44.709 Protection Information Capabilities: 00:07:44.709 16b Guard Protection Information Storage Tag Support: No 00:07:44.709 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:44.709 Storage Tag Check Read Support: No 00:07:44.709 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.709 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.709 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.709 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.709 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.709 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.709 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.709 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.709 18:17:03 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:44.709 18:17:03 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:44.968 ===================================================== 00:07:44.968 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:44.968 ===================================================== 00:07:44.968 Controller Capabilities/Features 00:07:44.968 ================================ 00:07:44.968 Vendor ID: 1b36 00:07:44.968 Subsystem Vendor ID: 1af4 00:07:44.968 Serial Number: 12341 00:07:44.968 Model Number: QEMU NVMe Ctrl 00:07:44.968 Firmware Version: 8.0.0 00:07:44.968 Recommended Arb Burst: 6 00:07:44.968 IEEE OUI Identifier: 00 54 52 00:07:44.968 Multi-path I/O 00:07:44.968 May have multiple subsystem ports: No 00:07:44.968 May have multiple controllers: No 00:07:44.968 Associated with SR-IOV VF: No 00:07:44.968 Max Data Transfer Size: 524288 00:07:44.968 Max Number of Namespaces: 256 00:07:44.968 Max Number of I/O Queues: 64 00:07:44.968 NVMe 
Specification Version (VS): 1.4 00:07:44.968 NVMe Specification Version (Identify): 1.4 00:07:44.968 Maximum Queue Entries: 2048 00:07:44.968 Contiguous Queues Required: Yes 00:07:44.968 Arbitration Mechanisms Supported 00:07:44.968 Weighted Round Robin: Not Supported 00:07:44.968 Vendor Specific: Not Supported 00:07:44.968 Reset Timeout: 7500 ms 00:07:44.968 Doorbell Stride: 4 bytes 00:07:44.968 NVM Subsystem Reset: Not Supported 00:07:44.968 Command Sets Supported 00:07:44.968 NVM Command Set: Supported 00:07:44.968 Boot Partition: Not Supported 00:07:44.968 Memory Page Size Minimum: 4096 bytes 00:07:44.968 Memory Page Size Maximum: 65536 bytes 00:07:44.968 Persistent Memory Region: Not Supported 00:07:44.968 Optional Asynchronous Events Supported 00:07:44.968 Namespace Attribute Notices: Supported 00:07:44.968 Firmware Activation Notices: Not Supported 00:07:44.968 ANA Change Notices: Not Supported 00:07:44.968 PLE Aggregate Log Change Notices: Not Supported 00:07:44.968 LBA Status Info Alert Notices: Not Supported 00:07:44.968 EGE Aggregate Log Change Notices: Not Supported 00:07:44.968 Normal NVM Subsystem Shutdown event: Not Supported 00:07:44.968 Zone Descriptor Change Notices: Not Supported 00:07:44.968 Discovery Log Change Notices: Not Supported 00:07:44.968 Controller Attributes 00:07:44.968 128-bit Host Identifier: Not Supported 00:07:44.968 Non-Operational Permissive Mode: Not Supported 00:07:44.968 NVM Sets: Not Supported 00:07:44.968 Read Recovery Levels: Not Supported 00:07:44.968 Endurance Groups: Not Supported 00:07:44.968 Predictable Latency Mode: Not Supported 00:07:44.968 Traffic Based Keep ALive: Not Supported 00:07:44.968 Namespace Granularity: Not Supported 00:07:44.968 SQ Associations: Not Supported 00:07:44.968 UUID List: Not Supported 00:07:44.968 Multi-Domain Subsystem: Not Supported 00:07:44.968 Fixed Capacity Management: Not Supported 00:07:44.968 Variable Capacity Management: Not Supported 00:07:44.968 Delete Endurance Group: Not Supported 00:07:44.968 Delete NVM Set: Not Supported 00:07:44.968 Extended LBA Formats Supported: Supported 00:07:44.968 Flexible Data Placement Supported: Not Supported 00:07:44.968 00:07:44.968 Controller Memory Buffer Support 00:07:44.968 ================================ 00:07:44.968 Supported: No 00:07:44.968 00:07:44.968 Persistent Memory Region Support 00:07:44.968 ================================ 00:07:44.968 Supported: No 00:07:44.968 00:07:44.968 Admin Command Set Attributes 00:07:44.968 ============================ 00:07:44.968 Security Send/Receive: Not Supported 00:07:44.968 Format NVM: Supported 00:07:44.968 Firmware Activate/Download: Not Supported 00:07:44.968 Namespace Management: Supported 00:07:44.968 Device Self-Test: Not Supported 00:07:44.968 Directives: Supported 00:07:44.968 NVMe-MI: Not Supported 00:07:44.968 Virtualization Management: Not Supported 00:07:44.968 Doorbell Buffer Config: Supported 00:07:44.968 Get LBA Status Capability: Not Supported 00:07:44.968 Command & Feature Lockdown Capability: Not Supported 00:07:44.968 Abort Command Limit: 4 00:07:44.968 Async Event Request Limit: 4 00:07:44.968 Number of Firmware Slots: N/A 00:07:44.968 Firmware Slot 1 Read-Only: N/A 00:07:44.968 Firmware Activation Without Reset: N/A 00:07:44.968 Multiple Update Detection Support: N/A 00:07:44.968 Firmware Update Granularity: No Information Provided 00:07:44.968 Per-Namespace SMART Log: Yes 00:07:44.968 Asymmetric Namespace Access Log Page: Not Supported 00:07:44.968 Subsystem NQN: nqn.2019-08.org.qemu:12341 
00:07:44.968 Command Effects Log Page: Supported 00:07:44.968 Get Log Page Extended Data: Supported 00:07:44.968 Telemetry Log Pages: Not Supported 00:07:44.968 Persistent Event Log Pages: Not Supported 00:07:44.968 Supported Log Pages Log Page: May Support 00:07:44.968 Commands Supported & Effects Log Page: Not Supported 00:07:44.968 Feature Identifiers & Effects Log Page:May Support 00:07:44.968 NVMe-MI Commands & Effects Log Page: May Support 00:07:44.968 Data Area 4 for Telemetry Log: Not Supported 00:07:44.968 Error Log Page Entries Supported: 1 00:07:44.968 Keep Alive: Not Supported 00:07:44.968 00:07:44.968 NVM Command Set Attributes 00:07:44.968 ========================== 00:07:44.968 Submission Queue Entry Size 00:07:44.968 Max: 64 00:07:44.968 Min: 64 00:07:44.968 Completion Queue Entry Size 00:07:44.968 Max: 16 00:07:44.968 Min: 16 00:07:44.968 Number of Namespaces: 256 00:07:44.968 Compare Command: Supported 00:07:44.968 Write Uncorrectable Command: Not Supported 00:07:44.968 Dataset Management Command: Supported 00:07:44.968 Write Zeroes Command: Supported 00:07:44.968 Set Features Save Field: Supported 00:07:44.968 Reservations: Not Supported 00:07:44.968 Timestamp: Supported 00:07:44.968 Copy: Supported 00:07:44.968 Volatile Write Cache: Present 00:07:44.968 Atomic Write Unit (Normal): 1 00:07:44.968 Atomic Write Unit (PFail): 1 00:07:44.968 Atomic Compare & Write Unit: 1 00:07:44.968 Fused Compare & Write: Not Supported 00:07:44.968 Scatter-Gather List 00:07:44.968 SGL Command Set: Supported 00:07:44.968 SGL Keyed: Not Supported 00:07:44.968 SGL Bit Bucket Descriptor: Not Supported 00:07:44.968 SGL Metadata Pointer: Not Supported 00:07:44.968 Oversized SGL: Not Supported 00:07:44.968 SGL Metadata Address: Not Supported 00:07:44.968 SGL Offset: Not Supported 00:07:44.968 Transport SGL Data Block: Not Supported 00:07:44.968 Replay Protected Memory Block: Not Supported 00:07:44.968 00:07:44.968 Firmware Slot Information 00:07:44.968 ========================= 00:07:44.968 Active slot: 1 00:07:44.968 Slot 1 Firmware Revision: 1.0 00:07:44.968 00:07:44.968 00:07:44.968 Commands Supported and Effects 00:07:44.968 ============================== 00:07:44.968 Admin Commands 00:07:44.968 -------------- 00:07:44.968 Delete I/O Submission Queue (00h): Supported 00:07:44.968 Create I/O Submission Queue (01h): Supported 00:07:44.968 Get Log Page (02h): Supported 00:07:44.968 Delete I/O Completion Queue (04h): Supported 00:07:44.968 Create I/O Completion Queue (05h): Supported 00:07:44.969 Identify (06h): Supported 00:07:44.969 Abort (08h): Supported 00:07:44.969 Set Features (09h): Supported 00:07:44.969 Get Features (0Ah): Supported 00:07:44.969 Asynchronous Event Request (0Ch): Supported 00:07:44.969 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:44.969 Directive Send (19h): Supported 00:07:44.969 Directive Receive (1Ah): Supported 00:07:44.969 Virtualization Management (1Ch): Supported 00:07:44.969 Doorbell Buffer Config (7Ch): Supported 00:07:44.969 Format NVM (80h): Supported LBA-Change 00:07:44.969 I/O Commands 00:07:44.969 ------------ 00:07:44.969 Flush (00h): Supported LBA-Change 00:07:44.969 Write (01h): Supported LBA-Change 00:07:44.969 Read (02h): Supported 00:07:44.969 Compare (05h): Supported 00:07:44.969 Write Zeroes (08h): Supported LBA-Change 00:07:44.969 Dataset Management (09h): Supported LBA-Change 00:07:44.969 Unknown (0Ch): Supported 00:07:44.969 Unknown (12h): Supported 00:07:44.969 Copy (19h): Supported LBA-Change 00:07:44.969 Unknown (1Dh): 
Supported LBA-Change 00:07:44.969 00:07:44.969 Error Log 00:07:44.969 ========= 00:07:44.969 00:07:44.969 Arbitration 00:07:44.969 =========== 00:07:44.969 Arbitration Burst: no limit 00:07:44.969 00:07:44.969 Power Management 00:07:44.969 ================ 00:07:44.969 Number of Power States: 1 00:07:44.969 Current Power State: Power State #0 00:07:44.969 Power State #0: 00:07:44.969 Max Power: 25.00 W 00:07:44.969 Non-Operational State: Operational 00:07:44.969 Entry Latency: 16 microseconds 00:07:44.969 Exit Latency: 4 microseconds 00:07:44.969 Relative Read Throughput: 0 00:07:44.969 Relative Read Latency: 0 00:07:44.969 Relative Write Throughput: 0 00:07:44.969 Relative Write Latency: 0 00:07:44.969 Idle Power: Not Reported 00:07:44.969 Active Power: Not Reported 00:07:44.969 Non-Operational Permissive Mode: Not Supported 00:07:44.969 00:07:44.969 Health Information 00:07:44.969 ================== 00:07:44.969 Critical Warnings: 00:07:44.969 Available Spare Space: OK 00:07:44.969 Temperature: OK 00:07:44.969 Device Reliability: OK 00:07:44.969 Read Only: No 00:07:44.969 Volatile Memory Backup: OK 00:07:44.969 Current Temperature: 323 Kelvin (50 Celsius) 00:07:44.969 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:44.969 Available Spare: 0% 00:07:44.969 Available Spare Threshold: 0% 00:07:44.969 Life Percentage Used: 0% 00:07:44.969 Data Units Read: 1055 00:07:44.969 Data Units Written: 923 00:07:44.969 Host Read Commands: 57202 00:07:44.969 Host Write Commands: 55999 00:07:44.969 Controller Busy Time: 0 minutes 00:07:44.969 Power Cycles: 0 00:07:44.969 Power On Hours: 0 hours 00:07:44.969 Unsafe Shutdowns: 0 00:07:44.969 Unrecoverable Media Errors: 0 00:07:44.969 Lifetime Error Log Entries: 0 00:07:44.969 Warning Temperature Time: 0 minutes 00:07:44.969 Critical Temperature Time: 0 minutes 00:07:44.969 00:07:44.969 Number of Queues 00:07:44.969 ================ 00:07:44.969 Number of I/O Submission Queues: 64 00:07:44.969 Number of I/O Completion Queues: 64 00:07:44.969 00:07:44.969 ZNS Specific Controller Data 00:07:44.969 ============================ 00:07:44.969 Zone Append Size Limit: 0 00:07:44.969 00:07:44.969 00:07:44.969 Active Namespaces 00:07:44.969 ================= 00:07:44.969 Namespace ID:1 00:07:44.969 Error Recovery Timeout: Unlimited 00:07:44.969 Command Set Identifier: NVM (00h) 00:07:44.969 Deallocate: Supported 00:07:44.969 Deallocated/Unwritten Error: Supported 00:07:44.969 Deallocated Read Value: All 0x00 00:07:44.969 Deallocate in Write Zeroes: Not Supported 00:07:44.969 Deallocated Guard Field: 0xFFFF 00:07:44.969 Flush: Supported 00:07:44.969 Reservation: Not Supported 00:07:44.969 Namespace Sharing Capabilities: Private 00:07:44.969 Size (in LBAs): 1310720 (5GiB) 00:07:44.969 Capacity (in LBAs): 1310720 (5GiB) 00:07:44.969 Utilization (in LBAs): 1310720 (5GiB) 00:07:44.969 Thin Provisioning: Not Supported 00:07:44.969 Per-NS Atomic Units: No 00:07:44.969 Maximum Single Source Range Length: 128 00:07:44.969 Maximum Copy Length: 128 00:07:44.969 Maximum Source Range Count: 128 00:07:44.969 NGUID/EUI64 Never Reused: No 00:07:44.969 Namespace Write Protected: No 00:07:44.969 Number of LBA Formats: 8 00:07:44.969 Current LBA Format: LBA Format #04 00:07:44.969 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:44.969 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:44.969 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:44.969 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:44.969 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:07:44.969 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:44.969 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:44.969 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:44.969 00:07:44.969 NVM Specific Namespace Data 00:07:44.969 =========================== 00:07:44.969 Logical Block Storage Tag Mask: 0 00:07:44.969 Protection Information Capabilities: 00:07:44.969 16b Guard Protection Information Storage Tag Support: No 00:07:44.969 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:44.969 Storage Tag Check Read Support: No 00:07:44.969 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.969 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.969 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.969 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.969 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.969 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.969 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.969 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:44.969 18:17:03 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:44.969 18:17:03 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:45.228 ===================================================== 00:07:45.228 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:45.228 ===================================================== 00:07:45.228 Controller Capabilities/Features 00:07:45.228 ================================ 00:07:45.228 Vendor ID: 1b36 00:07:45.228 Subsystem Vendor ID: 1af4 00:07:45.228 Serial Number: 12342 00:07:45.228 Model Number: QEMU NVMe Ctrl 00:07:45.228 Firmware Version: 8.0.0 00:07:45.228 Recommended Arb Burst: 6 00:07:45.228 IEEE OUI Identifier: 00 54 52 00:07:45.228 Multi-path I/O 00:07:45.228 May have multiple subsystem ports: No 00:07:45.228 May have multiple controllers: No 00:07:45.228 Associated with SR-IOV VF: No 00:07:45.228 Max Data Transfer Size: 524288 00:07:45.228 Max Number of Namespaces: 256 00:07:45.228 Max Number of I/O Queues: 64 00:07:45.228 NVMe Specification Version (VS): 1.4 00:07:45.228 NVMe Specification Version (Identify): 1.4 00:07:45.228 Maximum Queue Entries: 2048 00:07:45.228 Contiguous Queues Required: Yes 00:07:45.228 Arbitration Mechanisms Supported 00:07:45.228 Weighted Round Robin: Not Supported 00:07:45.228 Vendor Specific: Not Supported 00:07:45.228 Reset Timeout: 7500 ms 00:07:45.228 Doorbell Stride: 4 bytes 00:07:45.228 NVM Subsystem Reset: Not Supported 00:07:45.228 Command Sets Supported 00:07:45.228 NVM Command Set: Supported 00:07:45.228 Boot Partition: Not Supported 00:07:45.228 Memory Page Size Minimum: 4096 bytes 00:07:45.228 Memory Page Size Maximum: 65536 bytes 00:07:45.228 Persistent Memory Region: Not Supported 00:07:45.228 Optional Asynchronous Events Supported 00:07:45.228 Namespace Attribute Notices: Supported 00:07:45.228 Firmware Activation Notices: Not Supported 00:07:45.228 ANA Change Notices: Not Supported 00:07:45.228 PLE Aggregate Log Change Notices: Not Supported 00:07:45.228 LBA Status Info Alert Notices: 
Not Supported 00:07:45.228 EGE Aggregate Log Change Notices: Not Supported 00:07:45.228 Normal NVM Subsystem Shutdown event: Not Supported 00:07:45.228 Zone Descriptor Change Notices: Not Supported 00:07:45.228 Discovery Log Change Notices: Not Supported 00:07:45.228 Controller Attributes 00:07:45.228 128-bit Host Identifier: Not Supported 00:07:45.228 Non-Operational Permissive Mode: Not Supported 00:07:45.228 NVM Sets: Not Supported 00:07:45.229 Read Recovery Levels: Not Supported 00:07:45.229 Endurance Groups: Not Supported 00:07:45.229 Predictable Latency Mode: Not Supported 00:07:45.229 Traffic Based Keep ALive: Not Supported 00:07:45.229 Namespace Granularity: Not Supported 00:07:45.229 SQ Associations: Not Supported 00:07:45.229 UUID List: Not Supported 00:07:45.229 Multi-Domain Subsystem: Not Supported 00:07:45.229 Fixed Capacity Management: Not Supported 00:07:45.229 Variable Capacity Management: Not Supported 00:07:45.229 Delete Endurance Group: Not Supported 00:07:45.229 Delete NVM Set: Not Supported 00:07:45.229 Extended LBA Formats Supported: Supported 00:07:45.229 Flexible Data Placement Supported: Not Supported 00:07:45.229 00:07:45.229 Controller Memory Buffer Support 00:07:45.229 ================================ 00:07:45.229 Supported: No 00:07:45.229 00:07:45.229 Persistent Memory Region Support 00:07:45.229 ================================ 00:07:45.229 Supported: No 00:07:45.229 00:07:45.229 Admin Command Set Attributes 00:07:45.229 ============================ 00:07:45.229 Security Send/Receive: Not Supported 00:07:45.229 Format NVM: Supported 00:07:45.229 Firmware Activate/Download: Not Supported 00:07:45.229 Namespace Management: Supported 00:07:45.229 Device Self-Test: Not Supported 00:07:45.229 Directives: Supported 00:07:45.229 NVMe-MI: Not Supported 00:07:45.229 Virtualization Management: Not Supported 00:07:45.229 Doorbell Buffer Config: Supported 00:07:45.229 Get LBA Status Capability: Not Supported 00:07:45.229 Command & Feature Lockdown Capability: Not Supported 00:07:45.229 Abort Command Limit: 4 00:07:45.229 Async Event Request Limit: 4 00:07:45.229 Number of Firmware Slots: N/A 00:07:45.229 Firmware Slot 1 Read-Only: N/A 00:07:45.229 Firmware Activation Without Reset: N/A 00:07:45.229 Multiple Update Detection Support: N/A 00:07:45.229 Firmware Update Granularity: No Information Provided 00:07:45.229 Per-Namespace SMART Log: Yes 00:07:45.229 Asymmetric Namespace Access Log Page: Not Supported 00:07:45.229 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:45.229 Command Effects Log Page: Supported 00:07:45.229 Get Log Page Extended Data: Supported 00:07:45.229 Telemetry Log Pages: Not Supported 00:07:45.229 Persistent Event Log Pages: Not Supported 00:07:45.229 Supported Log Pages Log Page: May Support 00:07:45.229 Commands Supported & Effects Log Page: Not Supported 00:07:45.229 Feature Identifiers & Effects Log Page:May Support 00:07:45.229 NVMe-MI Commands & Effects Log Page: May Support 00:07:45.229 Data Area 4 for Telemetry Log: Not Supported 00:07:45.229 Error Log Page Entries Supported: 1 00:07:45.229 Keep Alive: Not Supported 00:07:45.229 00:07:45.229 NVM Command Set Attributes 00:07:45.229 ========================== 00:07:45.229 Submission Queue Entry Size 00:07:45.229 Max: 64 00:07:45.229 Min: 64 00:07:45.229 Completion Queue Entry Size 00:07:45.229 Max: 16 00:07:45.229 Min: 16 00:07:45.229 Number of Namespaces: 256 00:07:45.229 Compare Command: Supported 00:07:45.229 Write Uncorrectable Command: Not Supported 00:07:45.229 Dataset Management Command: 
Supported 00:07:45.229 Write Zeroes Command: Supported 00:07:45.229 Set Features Save Field: Supported 00:07:45.229 Reservations: Not Supported 00:07:45.229 Timestamp: Supported 00:07:45.229 Copy: Supported 00:07:45.229 Volatile Write Cache: Present 00:07:45.229 Atomic Write Unit (Normal): 1 00:07:45.229 Atomic Write Unit (PFail): 1 00:07:45.229 Atomic Compare & Write Unit: 1 00:07:45.229 Fused Compare & Write: Not Supported 00:07:45.229 Scatter-Gather List 00:07:45.229 SGL Command Set: Supported 00:07:45.229 SGL Keyed: Not Supported 00:07:45.229 SGL Bit Bucket Descriptor: Not Supported 00:07:45.229 SGL Metadata Pointer: Not Supported 00:07:45.229 Oversized SGL: Not Supported 00:07:45.229 SGL Metadata Address: Not Supported 00:07:45.229 SGL Offset: Not Supported 00:07:45.229 Transport SGL Data Block: Not Supported 00:07:45.229 Replay Protected Memory Block: Not Supported 00:07:45.229 00:07:45.229 Firmware Slot Information 00:07:45.229 ========================= 00:07:45.229 Active slot: 1 00:07:45.229 Slot 1 Firmware Revision: 1.0 00:07:45.229 00:07:45.229 00:07:45.229 Commands Supported and Effects 00:07:45.229 ============================== 00:07:45.229 Admin Commands 00:07:45.229 -------------- 00:07:45.229 Delete I/O Submission Queue (00h): Supported 00:07:45.229 Create I/O Submission Queue (01h): Supported 00:07:45.229 Get Log Page (02h): Supported 00:07:45.229 Delete I/O Completion Queue (04h): Supported 00:07:45.229 Create I/O Completion Queue (05h): Supported 00:07:45.229 Identify (06h): Supported 00:07:45.229 Abort (08h): Supported 00:07:45.229 Set Features (09h): Supported 00:07:45.229 Get Features (0Ah): Supported 00:07:45.229 Asynchronous Event Request (0Ch): Supported 00:07:45.229 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:45.229 Directive Send (19h): Supported 00:07:45.229 Directive Receive (1Ah): Supported 00:07:45.229 Virtualization Management (1Ch): Supported 00:07:45.229 Doorbell Buffer Config (7Ch): Supported 00:07:45.229 Format NVM (80h): Supported LBA-Change 00:07:45.229 I/O Commands 00:07:45.229 ------------ 00:07:45.229 Flush (00h): Supported LBA-Change 00:07:45.229 Write (01h): Supported LBA-Change 00:07:45.229 Read (02h): Supported 00:07:45.229 Compare (05h): Supported 00:07:45.229 Write Zeroes (08h): Supported LBA-Change 00:07:45.229 Dataset Management (09h): Supported LBA-Change 00:07:45.229 Unknown (0Ch): Supported 00:07:45.229 Unknown (12h): Supported 00:07:45.229 Copy (19h): Supported LBA-Change 00:07:45.229 Unknown (1Dh): Supported LBA-Change 00:07:45.229 00:07:45.229 Error Log 00:07:45.229 ========= 00:07:45.229 00:07:45.229 Arbitration 00:07:45.229 =========== 00:07:45.229 Arbitration Burst: no limit 00:07:45.229 00:07:45.229 Power Management 00:07:45.229 ================ 00:07:45.229 Number of Power States: 1 00:07:45.229 Current Power State: Power State #0 00:07:45.229 Power State #0: 00:07:45.229 Max Power: 25.00 W 00:07:45.229 Non-Operational State: Operational 00:07:45.229 Entry Latency: 16 microseconds 00:07:45.229 Exit Latency: 4 microseconds 00:07:45.229 Relative Read Throughput: 0 00:07:45.229 Relative Read Latency: 0 00:07:45.229 Relative Write Throughput: 0 00:07:45.229 Relative Write Latency: 0 00:07:45.229 Idle Power: Not Reported 00:07:45.229 Active Power: Not Reported 00:07:45.229 Non-Operational Permissive Mode: Not Supported 00:07:45.229 00:07:45.229 Health Information 00:07:45.229 ================== 00:07:45.229 Critical Warnings: 00:07:45.229 Available Spare Space: OK 00:07:45.229 Temperature: OK 00:07:45.229 Device 
Reliability: OK 00:07:45.229 Read Only: No 00:07:45.229 Volatile Memory Backup: OK 00:07:45.229 Current Temperature: 323 Kelvin (50 Celsius) 00:07:45.229 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:45.229 Available Spare: 0% 00:07:45.229 Available Spare Threshold: 0% 00:07:45.229 Life Percentage Used: 0% 00:07:45.229 Data Units Read: 2217 00:07:45.229 Data Units Written: 2005 00:07:45.229 Host Read Commands: 117464 00:07:45.229 Host Write Commands: 115733 00:07:45.229 Controller Busy Time: 0 minutes 00:07:45.229 Power Cycles: 0 00:07:45.229 Power On Hours: 0 hours 00:07:45.229 Unsafe Shutdowns: 0 00:07:45.229 Unrecoverable Media Errors: 0 00:07:45.229 Lifetime Error Log Entries: 0 00:07:45.229 Warning Temperature Time: 0 minutes 00:07:45.229 Critical Temperature Time: 0 minutes 00:07:45.229 00:07:45.229 Number of Queues 00:07:45.229 ================ 00:07:45.229 Number of I/O Submission Queues: 64 00:07:45.229 Number of I/O Completion Queues: 64 00:07:45.229 00:07:45.229 ZNS Specific Controller Data 00:07:45.229 ============================ 00:07:45.229 Zone Append Size Limit: 0 00:07:45.229 00:07:45.229 00:07:45.229 Active Namespaces 00:07:45.229 ================= 00:07:45.229 Namespace ID:1 00:07:45.229 Error Recovery Timeout: Unlimited 00:07:45.229 Command Set Identifier: NVM (00h) 00:07:45.229 Deallocate: Supported 00:07:45.229 Deallocated/Unwritten Error: Supported 00:07:45.229 Deallocated Read Value: All 0x00 00:07:45.229 Deallocate in Write Zeroes: Not Supported 00:07:45.229 Deallocated Guard Field: 0xFFFF 00:07:45.229 Flush: Supported 00:07:45.229 Reservation: Not Supported 00:07:45.229 Namespace Sharing Capabilities: Private 00:07:45.229 Size (in LBAs): 1048576 (4GiB) 00:07:45.229 Capacity (in LBAs): 1048576 (4GiB) 00:07:45.229 Utilization (in LBAs): 1048576 (4GiB) 00:07:45.229 Thin Provisioning: Not Supported 00:07:45.229 Per-NS Atomic Units: No 00:07:45.229 Maximum Single Source Range Length: 128 00:07:45.229 Maximum Copy Length: 128 00:07:45.229 Maximum Source Range Count: 128 00:07:45.229 NGUID/EUI64 Never Reused: No 00:07:45.229 Namespace Write Protected: No 00:07:45.229 Number of LBA Formats: 8 00:07:45.230 Current LBA Format: LBA Format #04 00:07:45.230 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:45.230 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:45.230 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:45.230 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:45.230 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:45.230 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:45.230 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:45.230 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:45.230 00:07:45.230 NVM Specific Namespace Data 00:07:45.230 =========================== 00:07:45.230 Logical Block Storage Tag Mask: 0 00:07:45.230 Protection Information Capabilities: 00:07:45.230 16b Guard Protection Information Storage Tag Support: No 00:07:45.230 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:45.230 Storage Tag Check Read Support: No 00:07:45.230 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.230 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.230 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.230 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.230 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.230 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.230 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.230 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.230 Namespace ID:2 00:07:45.230 Error Recovery Timeout: Unlimited 00:07:45.230 Command Set Identifier: NVM (00h) 00:07:45.230 Deallocate: Supported 00:07:45.230 Deallocated/Unwritten Error: Supported 00:07:45.230 Deallocated Read Value: All 0x00 00:07:45.230 Deallocate in Write Zeroes: Not Supported 00:07:45.230 Deallocated Guard Field: 0xFFFF 00:07:45.230 Flush: Supported 00:07:45.230 Reservation: Not Supported 00:07:45.230 Namespace Sharing Capabilities: Private 00:07:45.230 Size (in LBAs): 1048576 (4GiB) 00:07:45.230 Capacity (in LBAs): 1048576 (4GiB) 00:07:45.230 Utilization (in LBAs): 1048576 (4GiB) 00:07:45.230 Thin Provisioning: Not Supported 00:07:45.230 Per-NS Atomic Units: No 00:07:45.230 Maximum Single Source Range Length: 128 00:07:45.230 Maximum Copy Length: 128 00:07:45.230 Maximum Source Range Count: 128 00:07:45.230 NGUID/EUI64 Never Reused: No 00:07:45.230 Namespace Write Protected: No 00:07:45.230 Number of LBA Formats: 8 00:07:45.230 Current LBA Format: LBA Format #04 00:07:45.230 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:45.230 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:45.230 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:45.230 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:45.230 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:45.230 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:45.230 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:45.230 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:45.230 00:07:45.230 NVM Specific Namespace Data 00:07:45.230 =========================== 00:07:45.230 Logical Block Storage Tag Mask: 0 00:07:45.230 Protection Information Capabilities: 00:07:45.230 16b Guard Protection Information Storage Tag Support: No 00:07:45.230 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:45.230 Storage Tag Check Read Support: No 00:07:45.230 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.230 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.230 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.230 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.230 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.230 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.230 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.230 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.230 Namespace ID:3 00:07:45.230 Error Recovery Timeout: Unlimited 00:07:45.230 Command Set Identifier: NVM (00h) 00:07:45.230 Deallocate: Supported 00:07:45.230 Deallocated/Unwritten Error: Supported 00:07:45.230 Deallocated Read Value: All 0x00 00:07:45.230 Deallocate in Write Zeroes: Not Supported 00:07:45.230 Deallocated Guard Field: 0xFFFF 00:07:45.230 Flush: Supported 00:07:45.230 Reservation: Not Supported 00:07:45.230 
Namespace Sharing Capabilities: Private 00:07:45.230 Size (in LBAs): 1048576 (4GiB) 00:07:45.230 Capacity (in LBAs): 1048576 (4GiB) 00:07:45.230 Utilization (in LBAs): 1048576 (4GiB) 00:07:45.230 Thin Provisioning: Not Supported 00:07:45.230 Per-NS Atomic Units: No 00:07:45.230 Maximum Single Source Range Length: 128 00:07:45.230 Maximum Copy Length: 128 00:07:45.230 Maximum Source Range Count: 128 00:07:45.230 NGUID/EUI64 Never Reused: No 00:07:45.230 Namespace Write Protected: No 00:07:45.230 Number of LBA Formats: 8 00:07:45.230 Current LBA Format: LBA Format #04 00:07:45.230 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:45.230 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:45.230 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:45.230 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:45.230 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:45.230 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:45.230 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:45.230 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:45.230 00:07:45.230 NVM Specific Namespace Data 00:07:45.230 =========================== 00:07:45.230 Logical Block Storage Tag Mask: 0 00:07:45.230 Protection Information Capabilities: 00:07:45.230 16b Guard Protection Information Storage Tag Support: No 00:07:45.230 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:45.230 Storage Tag Check Read Support: No 00:07:45.230 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.230 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.230 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.230 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.230 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.230 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.230 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.230 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.230 18:17:03 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:45.230 18:17:03 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:45.489 ===================================================== 00:07:45.489 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:45.489 ===================================================== 00:07:45.489 Controller Capabilities/Features 00:07:45.489 ================================ 00:07:45.489 Vendor ID: 1b36 00:07:45.489 Subsystem Vendor ID: 1af4 00:07:45.489 Serial Number: 12343 00:07:45.489 Model Number: QEMU NVMe Ctrl 00:07:45.489 Firmware Version: 8.0.0 00:07:45.489 Recommended Arb Burst: 6 00:07:45.489 IEEE OUI Identifier: 00 54 52 00:07:45.489 Multi-path I/O 00:07:45.489 May have multiple subsystem ports: No 00:07:45.489 May have multiple controllers: Yes 00:07:45.489 Associated with SR-IOV VF: No 00:07:45.489 Max Data Transfer Size: 524288 00:07:45.489 Max Number of Namespaces: 256 00:07:45.489 Max Number of I/O Queues: 64 00:07:45.489 NVMe Specification Version (VS): 1.4 00:07:45.489 NVMe Specification Version (Identify): 1.4 00:07:45.489 Maximum Queue Entries: 2048 
00:07:45.489 Contiguous Queues Required: Yes 00:07:45.489 Arbitration Mechanisms Supported 00:07:45.489 Weighted Round Robin: Not Supported 00:07:45.489 Vendor Specific: Not Supported 00:07:45.489 Reset Timeout: 7500 ms 00:07:45.489 Doorbell Stride: 4 bytes 00:07:45.489 NVM Subsystem Reset: Not Supported 00:07:45.489 Command Sets Supported 00:07:45.489 NVM Command Set: Supported 00:07:45.489 Boot Partition: Not Supported 00:07:45.489 Memory Page Size Minimum: 4096 bytes 00:07:45.489 Memory Page Size Maximum: 65536 bytes 00:07:45.490 Persistent Memory Region: Not Supported 00:07:45.490 Optional Asynchronous Events Supported 00:07:45.490 Namespace Attribute Notices: Supported 00:07:45.490 Firmware Activation Notices: Not Supported 00:07:45.490 ANA Change Notices: Not Supported 00:07:45.490 PLE Aggregate Log Change Notices: Not Supported 00:07:45.490 LBA Status Info Alert Notices: Not Supported 00:07:45.490 EGE Aggregate Log Change Notices: Not Supported 00:07:45.490 Normal NVM Subsystem Shutdown event: Not Supported 00:07:45.490 Zone Descriptor Change Notices: Not Supported 00:07:45.490 Discovery Log Change Notices: Not Supported 00:07:45.490 Controller Attributes 00:07:45.490 128-bit Host Identifier: Not Supported 00:07:45.490 Non-Operational Permissive Mode: Not Supported 00:07:45.490 NVM Sets: Not Supported 00:07:45.490 Read Recovery Levels: Not Supported 00:07:45.490 Endurance Groups: Supported 00:07:45.490 Predictable Latency Mode: Not Supported 00:07:45.490 Traffic Based Keep Alive: Not Supported 00:07:45.490 Namespace Granularity: Not Supported 00:07:45.490 SQ Associations: Not Supported 00:07:45.490 UUID List: Not Supported 00:07:45.490 Multi-Domain Subsystem: Not Supported 00:07:45.490 Fixed Capacity Management: Not Supported 00:07:45.490 Variable Capacity Management: Not Supported 00:07:45.490 Delete Endurance Group: Not Supported 00:07:45.490 Delete NVM Set: Not Supported 00:07:45.490 Extended LBA Formats Supported: Supported 00:07:45.490 Flexible Data Placement Supported: Supported 00:07:45.490 00:07:45.490 Controller Memory Buffer Support 00:07:45.490 ================================ 00:07:45.490 Supported: No 00:07:45.490 00:07:45.490 Persistent Memory Region Support 00:07:45.490 ================================ 00:07:45.490 Supported: No 00:07:45.490 00:07:45.490 Admin Command Set Attributes 00:07:45.490 ============================ 00:07:45.490 Security Send/Receive: Not Supported 00:07:45.490 Format NVM: Supported 00:07:45.490 Firmware Activate/Download: Not Supported 00:07:45.490 Namespace Management: Supported 00:07:45.490 Device Self-Test: Not Supported 00:07:45.490 Directives: Supported 00:07:45.490 NVMe-MI: Not Supported 00:07:45.490 Virtualization Management: Not Supported 00:07:45.490 Doorbell Buffer Config: Supported 00:07:45.490 Get LBA Status Capability: Not Supported 00:07:45.490 Command & Feature Lockdown Capability: Not Supported 00:07:45.490 Abort Command Limit: 4 00:07:45.490 Async Event Request Limit: 4 00:07:45.490 Number of Firmware Slots: N/A 00:07:45.490 Firmware Slot 1 Read-Only: N/A 00:07:45.490 Firmware Activation Without Reset: N/A 00:07:45.490 Multiple Update Detection Support: N/A 00:07:45.490 Firmware Update Granularity: No Information Provided 00:07:45.490 Per-Namespace SMART Log: Yes 00:07:45.490 Asymmetric Namespace Access Log Page: Not Supported 00:07:45.490 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:45.490 Command Effects Log Page: Supported 00:07:45.490 Get Log Page Extended Data: Supported 00:07:45.490 Telemetry Log Pages: Not
Supported 00:07:45.490 Persistent Event Log Pages: Not Supported 00:07:45.490 Supported Log Pages Log Page: May Support 00:07:45.490 Commands Supported & Effects Log Page: Not Supported 00:07:45.490 Feature Identifiers & Effects Log Page: May Support 00:07:45.490 NVMe-MI Commands & Effects Log Page: May Support 00:07:45.490 Data Area 4 for Telemetry Log: Not Supported 00:07:45.490 Error Log Page Entries Supported: 1 00:07:45.490 Keep Alive: Not Supported 00:07:45.490 00:07:45.490 NVM Command Set Attributes 00:07:45.490 ========================== 00:07:45.490 Submission Queue Entry Size 00:07:45.490 Max: 64 00:07:45.490 Min: 64 00:07:45.490 Completion Queue Entry Size 00:07:45.490 Max: 16 00:07:45.490 Min: 16 00:07:45.490 Number of Namespaces: 256 00:07:45.490 Compare Command: Supported 00:07:45.490 Write Uncorrectable Command: Not Supported 00:07:45.490 Dataset Management Command: Supported 00:07:45.490 Write Zeroes Command: Supported 00:07:45.490 Set Features Save Field: Supported 00:07:45.490 Reservations: Not Supported 00:07:45.490 Timestamp: Supported 00:07:45.490 Copy: Supported 00:07:45.490 Volatile Write Cache: Present 00:07:45.490 Atomic Write Unit (Normal): 1 00:07:45.490 Atomic Write Unit (PFail): 1 00:07:45.490 Atomic Compare & Write Unit: 1 00:07:45.490 Fused Compare & Write: Not Supported 00:07:45.490 Scatter-Gather List 00:07:45.490 SGL Command Set: Supported 00:07:45.490 SGL Keyed: Not Supported 00:07:45.490 SGL Bit Bucket Descriptor: Not Supported 00:07:45.490 SGL Metadata Pointer: Not Supported 00:07:45.490 Oversized SGL: Not Supported 00:07:45.490 SGL Metadata Address: Not Supported 00:07:45.490 SGL Offset: Not Supported 00:07:45.490 Transport SGL Data Block: Not Supported 00:07:45.490 Replay Protected Memory Block: Not Supported 00:07:45.490 00:07:45.490 Firmware Slot Information 00:07:45.490 ========================= 00:07:45.490 Active slot: 1 00:07:45.490 Slot 1 Firmware Revision: 1.0 00:07:45.490 00:07:45.490 00:07:45.490 Commands Supported and Effects 00:07:45.490 ============================== 00:07:45.490 Admin Commands 00:07:45.490 -------------- 00:07:45.490 Delete I/O Submission Queue (00h): Supported 00:07:45.490 Create I/O Submission Queue (01h): Supported 00:07:45.490 Get Log Page (02h): Supported 00:07:45.490 Delete I/O Completion Queue (04h): Supported 00:07:45.490 Create I/O Completion Queue (05h): Supported 00:07:45.490 Identify (06h): Supported 00:07:45.490 Abort (08h): Supported 00:07:45.490 Set Features (09h): Supported 00:07:45.490 Get Features (0Ah): Supported 00:07:45.490 Asynchronous Event Request (0Ch): Supported 00:07:45.490 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:45.490 Directive Send (19h): Supported 00:07:45.490 Directive Receive (1Ah): Supported 00:07:45.490 Virtualization Management (1Ch): Supported 00:07:45.490 Doorbell Buffer Config (7Ch): Supported 00:07:45.490 Format NVM (80h): Supported LBA-Change 00:07:45.490 I/O Commands 00:07:45.490 ------------ 00:07:45.490 Flush (00h): Supported LBA-Change 00:07:45.490 Write (01h): Supported LBA-Change 00:07:45.490 Read (02h): Supported 00:07:45.490 Compare (05h): Supported 00:07:45.490 Write Zeroes (08h): Supported LBA-Change 00:07:45.490 Dataset Management (09h): Supported LBA-Change 00:07:45.490 Unknown (0Ch): Supported 00:07:45.490 Unknown (12h): Supported 00:07:45.490 Copy (19h): Supported LBA-Change 00:07:45.490 Unknown (1Dh): Supported LBA-Change 00:07:45.490 00:07:45.490 Error Log 00:07:45.490 ========= 00:07:45.490 00:07:45.490 Arbitration 00:07:45.490 ===========
00:07:45.490 Arbitration Burst: no limit 00:07:45.490 00:07:45.490 Power Management 00:07:45.490 ================ 00:07:45.490 Number of Power States: 1 00:07:45.490 Current Power State: Power State #0 00:07:45.490 Power State #0: 00:07:45.490 Max Power: 25.00 W 00:07:45.490 Non-Operational State: Operational 00:07:45.490 Entry Latency: 16 microseconds 00:07:45.490 Exit Latency: 4 microseconds 00:07:45.490 Relative Read Throughput: 0 00:07:45.490 Relative Read Latency: 0 00:07:45.490 Relative Write Throughput: 0 00:07:45.490 Relative Write Latency: 0 00:07:45.490 Idle Power: Not Reported 00:07:45.490 Active Power: Not Reported 00:07:45.490 Non-Operational Permissive Mode: Not Supported 00:07:45.490 00:07:45.490 Health Information 00:07:45.490 ================== 00:07:45.490 Critical Warnings: 00:07:45.490 Available Spare Space: OK 00:07:45.490 Temperature: OK 00:07:45.490 Device Reliability: OK 00:07:45.490 Read Only: No 00:07:45.490 Volatile Memory Backup: OK 00:07:45.490 Current Temperature: 323 Kelvin (50 Celsius) 00:07:45.490 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:45.490 Available Spare: 0% 00:07:45.490 Available Spare Threshold: 0% 00:07:45.490 Life Percentage Used: 0% 00:07:45.490 Data Units Read: 825 00:07:45.490 Data Units Written: 754 00:07:45.490 Host Read Commands: 39939 00:07:45.490 Host Write Commands: 39362 00:07:45.490 Controller Busy Time: 0 minutes 00:07:45.490 Power Cycles: 0 00:07:45.490 Power On Hours: 0 hours 00:07:45.490 Unsafe Shutdowns: 0 00:07:45.490 Unrecoverable Media Errors: 0 00:07:45.490 Lifetime Error Log Entries: 0 00:07:45.490 Warning Temperature Time: 0 minutes 00:07:45.490 Critical Temperature Time: 0 minutes 00:07:45.490 00:07:45.490 Number of Queues 00:07:45.490 ================ 00:07:45.490 Number of I/O Submission Queues: 64 00:07:45.490 Number of I/O Completion Queues: 64 00:07:45.490 00:07:45.490 ZNS Specific Controller Data 00:07:45.490 ============================ 00:07:45.490 Zone Append Size Limit: 0 00:07:45.490 00:07:45.490 00:07:45.490 Active Namespaces 00:07:45.490 ================= 00:07:45.491 Namespace ID:1 00:07:45.491 Error Recovery Timeout: Unlimited 00:07:45.491 Command Set Identifier: NVM (00h) 00:07:45.491 Deallocate: Supported 00:07:45.491 Deallocated/Unwritten Error: Supported 00:07:45.491 Deallocated Read Value: All 0x00 00:07:45.491 Deallocate in Write Zeroes: Not Supported 00:07:45.491 Deallocated Guard Field: 0xFFFF 00:07:45.491 Flush: Supported 00:07:45.491 Reservation: Not Supported 00:07:45.491 Namespace Sharing Capabilities: Multiple Controllers 00:07:45.491 Size (in LBAs): 262144 (1GiB) 00:07:45.491 Capacity (in LBAs): 262144 (1GiB) 00:07:45.491 Utilization (in LBAs): 262144 (1GiB) 00:07:45.491 Thin Provisioning: Not Supported 00:07:45.491 Per-NS Atomic Units: No 00:07:45.491 Maximum Single Source Range Length: 128 00:07:45.491 Maximum Copy Length: 128 00:07:45.491 Maximum Source Range Count: 128 00:07:45.491 NGUID/EUI64 Never Reused: No 00:07:45.491 Namespace Write Protected: No 00:07:45.491 Endurance group ID: 1 00:07:45.491 Number of LBA Formats: 8 00:07:45.491 Current LBA Format: LBA Format #04 00:07:45.491 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:45.491 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:45.491 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:45.491 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:45.491 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:45.491 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:45.491 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:07:45.491 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:45.491 00:07:45.491 Get Feature FDP: 00:07:45.491 ================ 00:07:45.491 Enabled: Yes 00:07:45.491 FDP configuration index: 0 00:07:45.491 00:07:45.491 FDP configurations log page 00:07:45.491 =========================== 00:07:45.491 Number of FDP configurations: 1 00:07:45.491 Version: 0 00:07:45.491 Size: 112 00:07:45.491 FDP Configuration Descriptor: 0 00:07:45.491 Descriptor Size: 96 00:07:45.491 Reclaim Group Identifier format: 2 00:07:45.491 FDP Volatile Write Cache: Not Present 00:07:45.491 FDP Configuration: Valid 00:07:45.491 Vendor Specific Size: 0 00:07:45.491 Number of Reclaim Groups: 2 00:07:45.491 Number of Reclaim Unit Handles: 8 00:07:45.491 Max Placement Identifiers: 128 00:07:45.491 Number of Namespaces Supported: 256 00:07:45.491 Reclaim Unit Nominal Size: 6000000 bytes 00:07:45.491 Estimated Reclaim Unit Time Limit: Not Reported 00:07:45.491 RUH Desc #000: RUH Type: Initially Isolated 00:07:45.491 RUH Desc #001: RUH Type: Initially Isolated 00:07:45.491 RUH Desc #002: RUH Type: Initially Isolated 00:07:45.491 RUH Desc #003: RUH Type: Initially Isolated 00:07:45.491 RUH Desc #004: RUH Type: Initially Isolated 00:07:45.491 RUH Desc #005: RUH Type: Initially Isolated 00:07:45.491 RUH Desc #006: RUH Type: Initially Isolated 00:07:45.491 RUH Desc #007: RUH Type: Initially Isolated 00:07:45.491 00:07:45.491 FDP reclaim unit handle usage log page 00:07:45.491 ====================================== 00:07:45.491 Number of Reclaim Unit Handles: 8 00:07:45.491 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:45.491 RUH Usage Desc #001: RUH Attributes: Unused 00:07:45.491 RUH Usage Desc #002: RUH Attributes: Unused 00:07:45.491 RUH Usage Desc #003: RUH Attributes: Unused 00:07:45.491 RUH Usage Desc #004: RUH Attributes: Unused 00:07:45.491 RUH Usage Desc #005: RUH Attributes: Unused 00:07:45.491 RUH Usage Desc #006: RUH Attributes: Unused 00:07:45.491 RUH Usage Desc #007: RUH Attributes: Unused 00:07:45.491 00:07:45.491 FDP statistics log page 00:07:45.491 ======================= 00:07:45.491 Host bytes with metadata written: 486514688 00:07:45.491 Media bytes with metadata written: 486559744 00:07:45.491 Media bytes erased: 0 00:07:45.491 00:07:45.491 FDP events log page 00:07:45.491 =================== 00:07:45.491 Number of FDP events: 0 00:07:45.491 00:07:45.491 NVM Specific Namespace Data 00:07:45.491 =========================== 00:07:45.491 Logical Block Storage Tag Mask: 0 00:07:45.491 Protection Information Capabilities: 00:07:45.491 16b Guard Protection Information Storage Tag Support: No 00:07:45.491 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:45.491 Storage Tag Check Read Support: No 00:07:45.491 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.491 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.491 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.491 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.491 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.491 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.491 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.491 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:45.491 00:07:45.491 real 0m1.215s 00:07:45.491 user 0m0.465s 00:07:45.491 sys 0m0.527s 00:07:45.491 18:17:03 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.491 ************************************ 00:07:45.491 END TEST nvme_identify 00:07:45.491 ************************************ 00:07:45.491 18:17:03 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:45.491 18:17:03 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:45.491 18:17:03 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:45.491 18:17:03 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.491 18:17:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:45.491 ************************************ 00:07:45.491 START TEST nvme_perf 00:07:45.491 ************************************ 00:07:45.491 18:17:03 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:07:45.491 18:17:03 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:46.875 Initializing NVMe Controllers 00:07:46.875 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:46.875 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:46.875 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:46.875 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:46.875 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:46.875 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:46.875 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:46.875 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:46.875 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:46.875 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:46.875 Initialization complete. Launching workers. 
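
The identify pass above and the perf run whose results follow can be reproduced by hand with the same binaries the harness calls. What follows is a minimal sketch, not the harness's own code: it assumes the SPDK checkout sits at the path shown in the log, that the four emulated controllers were bound to a userspace driver beforehand (e.g. via scripts/setup.sh), and that the BDF list matches the controllers attached above; the loop shape mirrors the for bdf in "${bdfs[@]}" iteration visible in nvme/nvme.sh, and every tool flag is copied verbatim from the commands logged here.

    #!/usr/bin/env bash
    # Identify each controller, then run the same perf workload:
    # 12288-byte sequential reads at queue depth 128 for 1 second,
    # with latency tracking (-LL) enabled as in the run below.
    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin
    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
        "$SPDK_BIN/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0
    done
    "$SPDK_BIN/spdk_nvme_perf" -q 128 -w read -o 12288 -t 1 -LL -i 0 -N
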
00:07:46.875 ======================================================== 00:07:46.875 Latency(us) 00:07:46.875 Device Information : IOPS MiB/s Average min max 00:07:46.875 PCIE (0000:00:13.0) NSID 1 from core 0: 12950.80 151.77 9900.59 5831.85 30816.03 00:07:46.875 PCIE (0000:00:10.0) NSID 1 from core 0: 12950.80 151.77 9887.11 5769.37 29398.64 00:07:46.875 PCIE (0000:00:11.0) NSID 1 from core 0: 12950.80 151.77 9873.62 5899.39 27644.24 00:07:46.875 PCIE (0000:00:12.0) NSID 1 from core 0: 12950.80 151.77 9858.46 5899.73 26483.22 00:07:46.875 PCIE (0000:00:12.0) NSID 2 from core 0: 12950.80 151.77 9843.79 5854.27 24756.29 00:07:46.875 PCIE (0000:00:12.0) NSID 3 from core 0: 13014.60 152.51 9781.08 5854.19 19072.25 00:07:46.875 ======================================================== 00:07:46.875 Total : 77768.62 911.35 9857.38 5769.37 30816.03 00:07:46.875 00:07:46.875 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:46.875 ================================================================================= 00:07:46.875 1.00000% : 6251.126us 00:07:46.875 10.00000% : 7108.135us 00:07:46.875 25.00000% : 7612.258us 00:07:46.875 50.00000% : 8267.618us 00:07:46.875 75.00000% : 11998.129us 00:07:46.875 90.00000% : 13913.797us 00:07:46.875 95.00000% : 14821.218us 00:07:46.875 98.00000% : 17039.360us 00:07:46.875 99.00000% : 18652.554us 00:07:46.875 99.50000% : 26012.751us 00:07:46.875 99.90000% : 30650.683us 00:07:46.875 99.99000% : 30852.332us 00:07:46.875 99.99900% : 30852.332us 00:07:46.875 99.99990% : 30852.332us 00:07:46.875 99.99999% : 30852.332us 00:07:46.875 00:07:46.875 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:46.875 ================================================================================= 00:07:46.875 1.00000% : 6200.714us 00:07:46.875 10.00000% : 7108.135us 00:07:46.875 25.00000% : 7612.258us 00:07:46.875 50.00000% : 8318.031us 00:07:46.875 75.00000% : 12048.542us 00:07:46.875 90.00000% : 13812.972us 00:07:46.875 95.00000% : 15022.868us 00:07:46.875 98.00000% : 17140.185us 00:07:46.875 99.00000% : 19358.326us 00:07:46.875 99.50000% : 24500.382us 00:07:46.875 99.90000% : 29239.138us 00:07:46.875 99.99000% : 29440.788us 00:07:46.875 99.99900% : 29440.788us 00:07:46.875 99.99990% : 29440.788us 00:07:46.875 99.99999% : 29440.788us 00:07:46.875 00:07:46.875 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:46.875 ================================================================================= 00:07:46.875 1.00000% : 6301.538us 00:07:46.876 10.00000% : 7108.135us 00:07:46.876 25.00000% : 7612.258us 00:07:46.876 50.00000% : 8267.618us 00:07:46.876 75.00000% : 12149.366us 00:07:46.876 90.00000% : 13812.972us 00:07:46.876 95.00000% : 15022.868us 00:07:46.876 98.00000% : 16938.535us 00:07:46.876 99.00000% : 18450.905us 00:07:46.876 99.50000% : 22685.538us 00:07:46.876 99.90000% : 27424.295us 00:07:46.876 99.99000% : 27625.945us 00:07:46.876 99.99900% : 27827.594us 00:07:46.876 99.99990% : 27827.594us 00:07:46.876 99.99999% : 27827.594us 00:07:46.876 00:07:46.876 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:46.876 ================================================================================= 00:07:46.876 1.00000% : 6276.332us 00:07:46.876 10.00000% : 7158.548us 00:07:46.876 25.00000% : 7612.258us 00:07:46.876 50.00000% : 8267.618us 00:07:46.876 75.00000% : 12098.954us 00:07:46.876 90.00000% : 13812.972us 00:07:46.876 95.00000% : 15022.868us 00:07:46.876 98.00000% : 16636.062us 00:07:46.876 
99.00000% : 18551.729us 00:07:46.876 99.50000% : 21374.818us 00:07:46.876 99.90000% : 26214.400us 00:07:46.876 99.99000% : 26617.698us 00:07:46.876 99.99900% : 26617.698us 00:07:46.876 99.99990% : 26617.698us 00:07:46.876 99.99999% : 26617.698us 00:07:46.876 00:07:46.876 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:46.876 ================================================================================= 00:07:46.876 1.00000% : 6276.332us 00:07:46.876 10.00000% : 7158.548us 00:07:46.876 25.00000% : 7612.258us 00:07:46.876 50.00000% : 8267.618us 00:07:46.876 75.00000% : 12149.366us 00:07:46.876 90.00000% : 13913.797us 00:07:46.876 95.00000% : 14922.043us 00:07:46.876 98.00000% : 16837.711us 00:07:46.876 99.00000% : 18249.255us 00:07:46.876 99.50000% : 19660.800us 00:07:46.876 99.90000% : 24500.382us 00:07:46.876 99.99000% : 24802.855us 00:07:46.876 99.99900% : 24802.855us 00:07:46.876 99.99990% : 24802.855us 00:07:46.876 99.99999% : 24802.855us 00:07:46.876 00:07:46.876 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:46.876 ================================================================================= 00:07:46.876 1.00000% : 6276.332us 00:07:46.876 10.00000% : 7158.548us 00:07:46.876 25.00000% : 7612.258us 00:07:46.876 50.00000% : 8267.618us 00:07:46.876 75.00000% : 11998.129us 00:07:46.876 90.00000% : 13913.797us 00:07:46.876 95.00000% : 14720.394us 00:07:46.876 98.00000% : 15930.289us 00:07:46.876 99.00000% : 17543.483us 00:07:46.876 99.50000% : 17946.782us 00:07:46.876 99.90000% : 18854.203us 00:07:46.876 99.99000% : 19055.852us 00:07:46.876 99.99900% : 19156.677us 00:07:46.876 99.99990% : 19156.677us 00:07:46.876 99.99999% : 19156.677us 00:07:46.876 00:07:46.876 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:46.876 ============================================================================== 00:07:46.876 Range in us Cumulative IO count 00:07:46.876 5822.622 - 5847.828: 0.0231% ( 3) 00:07:46.876 5847.828 - 5873.034: 0.0539% ( 4) 00:07:46.876 5873.034 - 5898.240: 0.0847% ( 4) 00:07:46.876 5898.240 - 5923.446: 0.1078% ( 3) 00:07:46.876 5923.446 - 5948.652: 0.1385% ( 4) 00:07:46.876 5948.652 - 5973.858: 0.1693% ( 4) 00:07:46.876 5973.858 - 5999.065: 0.2001% ( 4) 00:07:46.876 5999.065 - 6024.271: 0.2386% ( 5) 00:07:46.876 6024.271 - 6049.477: 0.2848% ( 6) 00:07:46.876 6049.477 - 6074.683: 0.3156% ( 4) 00:07:46.876 6074.683 - 6099.889: 0.3925% ( 10) 00:07:46.876 6099.889 - 6125.095: 0.4772% ( 11) 00:07:46.876 6125.095 - 6150.302: 0.5542% ( 10) 00:07:46.876 6150.302 - 6175.508: 0.6466% ( 12) 00:07:46.876 6175.508 - 6200.714: 0.7697% ( 16) 00:07:46.876 6200.714 - 6225.920: 0.8775% ( 14) 00:07:46.876 6225.920 - 6251.126: 1.0083% ( 17) 00:07:46.876 6251.126 - 6276.332: 1.1392% ( 17) 00:07:46.876 6276.332 - 6301.538: 1.2700% ( 17) 00:07:46.876 6301.538 - 6326.745: 1.4470% ( 23) 00:07:46.876 6326.745 - 6351.951: 1.5625% ( 15) 00:07:46.876 6351.951 - 6377.157: 1.7087% ( 19) 00:07:46.876 6377.157 - 6402.363: 1.8704% ( 21) 00:07:46.876 6402.363 - 6427.569: 2.0397% ( 22) 00:07:46.876 6427.569 - 6452.775: 2.2860% ( 32) 00:07:46.876 6452.775 - 6503.188: 2.7786% ( 64) 00:07:46.876 6503.188 - 6553.600: 3.1789% ( 52) 00:07:46.876 6553.600 - 6604.012: 3.5945% ( 54) 00:07:46.876 6604.012 - 6654.425: 4.1641% ( 74) 00:07:46.876 6654.425 - 6704.837: 4.6798% ( 67) 00:07:46.876 6704.837 - 6755.249: 5.2802% ( 78) 00:07:46.876 6755.249 - 6805.662: 5.8267% ( 71) 00:07:46.876 6805.662 - 6856.074: 6.5040% ( 88) 00:07:46.876 6856.074 - 6906.486: 
7.2198% ( 93) 00:07:46.876 6906.486 - 6956.898: 7.8818% ( 86) 00:07:46.876 6956.898 - 7007.311: 8.6438% ( 99) 00:07:46.876 7007.311 - 7057.723: 9.4520% ( 105) 00:07:46.876 7057.723 - 7108.135: 10.2602% ( 105) 00:07:46.876 7108.135 - 7158.548: 11.0914% ( 108) 00:07:46.876 7158.548 - 7208.960: 12.0228% ( 121) 00:07:46.876 7208.960 - 7259.372: 13.0542% ( 134) 00:07:46.876 7259.372 - 7309.785: 14.3165% ( 164) 00:07:46.876 7309.785 - 7360.197: 15.8713% ( 202) 00:07:46.876 7360.197 - 7410.609: 17.6031% ( 225) 00:07:46.876 7410.609 - 7461.022: 19.4581% ( 241) 00:07:46.876 7461.022 - 7511.434: 21.5363% ( 270) 00:07:46.876 7511.434 - 7561.846: 23.8839% ( 305) 00:07:46.876 7561.846 - 7612.258: 26.1007% ( 288) 00:07:46.876 7612.258 - 7662.671: 28.2635% ( 281) 00:07:46.876 7662.671 - 7713.083: 30.4726% ( 287) 00:07:46.876 7713.083 - 7763.495: 32.6740% ( 286) 00:07:46.876 7763.495 - 7813.908: 34.8445% ( 282) 00:07:46.876 7813.908 - 7864.320: 36.9381% ( 272) 00:07:46.876 7864.320 - 7914.732: 38.9393% ( 260) 00:07:46.876 7914.732 - 7965.145: 40.9714% ( 264) 00:07:46.876 7965.145 - 8015.557: 43.0342% ( 268) 00:07:46.876 8015.557 - 8065.969: 44.8045% ( 230) 00:07:46.876 8065.969 - 8116.382: 46.4825% ( 218) 00:07:46.876 8116.382 - 8166.794: 47.9449% ( 190) 00:07:46.876 8166.794 - 8217.206: 49.1533% ( 157) 00:07:46.876 8217.206 - 8267.618: 50.1770% ( 133) 00:07:46.876 8267.618 - 8318.031: 50.9544% ( 101) 00:07:46.876 8318.031 - 8368.443: 51.5394% ( 76) 00:07:46.876 8368.443 - 8418.855: 51.9704% ( 56) 00:07:46.876 8418.855 - 8469.268: 52.3938% ( 55) 00:07:46.876 8469.268 - 8519.680: 52.8171% ( 55) 00:07:46.876 8519.680 - 8570.092: 53.1558% ( 44) 00:07:46.876 8570.092 - 8620.505: 53.4252% ( 35) 00:07:46.876 8620.505 - 8670.917: 53.7100% ( 37) 00:07:46.876 8670.917 - 8721.329: 53.9563% ( 32) 00:07:46.876 8721.329 - 8771.742: 54.2257% ( 35) 00:07:46.876 8771.742 - 8822.154: 54.4951% ( 35) 00:07:46.876 8822.154 - 8872.566: 54.7645% ( 35) 00:07:46.876 8872.566 - 8922.978: 55.0262% ( 34) 00:07:46.876 8922.978 - 8973.391: 55.2648% ( 31) 00:07:46.876 8973.391 - 9023.803: 55.5034% ( 31) 00:07:46.876 9023.803 - 9074.215: 55.7420% ( 31) 00:07:46.876 9074.215 - 9124.628: 55.9806% ( 31) 00:07:46.876 9124.628 - 9175.040: 56.1884% ( 27) 00:07:46.876 9175.040 - 9225.452: 56.3962% ( 27) 00:07:46.876 9225.452 - 9275.865: 56.6349% ( 31) 00:07:46.876 9275.865 - 9326.277: 56.8504% ( 28) 00:07:46.876 9326.277 - 9376.689: 57.0736% ( 29) 00:07:46.876 9376.689 - 9427.102: 57.2660% ( 25) 00:07:46.876 9427.102 - 9477.514: 57.4046% ( 18) 00:07:46.876 9477.514 - 9527.926: 57.5431% ( 18) 00:07:46.876 9527.926 - 9578.338: 57.6893% ( 19) 00:07:46.876 9578.338 - 9628.751: 57.8587% ( 22) 00:07:46.876 9628.751 - 9679.163: 57.9818% ( 16) 00:07:46.876 9679.163 - 9729.575: 58.1512% ( 22) 00:07:46.876 9729.575 - 9779.988: 58.3359% ( 24) 00:07:46.876 9779.988 - 9830.400: 58.5206% ( 24) 00:07:46.876 9830.400 - 9880.812: 58.7438% ( 29) 00:07:46.876 9880.812 - 9931.225: 58.9825% ( 31) 00:07:46.876 9931.225 - 9981.637: 59.1826% ( 26) 00:07:46.876 9981.637 - 10032.049: 59.3519% ( 22) 00:07:46.876 10032.049 - 10082.462: 59.5135% ( 21) 00:07:46.876 10082.462 - 10132.874: 59.7137% ( 26) 00:07:46.876 10132.874 - 10183.286: 59.8907% ( 23) 00:07:46.876 10183.286 - 10233.698: 60.0754% ( 24) 00:07:46.876 10233.698 - 10284.111: 60.3371% ( 34) 00:07:46.876 10284.111 - 10334.523: 60.5911% ( 33) 00:07:46.876 10334.523 - 10384.935: 60.9144% ( 42) 00:07:46.876 10384.935 - 10435.348: 61.2146% ( 39) 00:07:46.876 10435.348 - 10485.760: 61.5533% ( 44) 00:07:46.876 
10485.760 - 10536.172: 61.8919% ( 44) 00:07:46.876 10536.172 - 10586.585: 62.3076% ( 54) 00:07:46.876 10586.585 - 10636.997: 62.6847% ( 49) 00:07:46.876 10636.997 - 10687.409: 63.0773% ( 51) 00:07:46.876 10687.409 - 10737.822: 63.4390% ( 47) 00:07:46.876 10737.822 - 10788.234: 63.7700% ( 43) 00:07:46.876 10788.234 - 10838.646: 64.1626% ( 51) 00:07:46.876 10838.646 - 10889.058: 64.5320% ( 48) 00:07:46.876 10889.058 - 10939.471: 64.8938% ( 47) 00:07:46.877 10939.471 - 10989.883: 65.2555% ( 47) 00:07:46.877 10989.883 - 11040.295: 65.6327% ( 49) 00:07:46.877 11040.295 - 11090.708: 66.0329% ( 52) 00:07:46.877 11090.708 - 11141.120: 66.4409% ( 53) 00:07:46.877 11141.120 - 11191.532: 66.8642% ( 55) 00:07:46.877 11191.532 - 11241.945: 67.3722% ( 66) 00:07:46.877 11241.945 - 11292.357: 67.8802% ( 66) 00:07:46.877 11292.357 - 11342.769: 68.4113% ( 69) 00:07:46.877 11342.769 - 11393.182: 68.9193% ( 66) 00:07:46.877 11393.182 - 11443.594: 69.3966% ( 62) 00:07:46.877 11443.594 - 11494.006: 69.8507% ( 59) 00:07:46.877 11494.006 - 11544.418: 70.3279% ( 62) 00:07:46.877 11544.418 - 11594.831: 70.7820% ( 59) 00:07:46.877 11594.831 - 11645.243: 71.2515% ( 61) 00:07:46.877 11645.243 - 11695.655: 71.7211% ( 61) 00:07:46.877 11695.655 - 11746.068: 72.2445% ( 68) 00:07:46.877 11746.068 - 11796.480: 72.7756% ( 69) 00:07:46.877 11796.480 - 11846.892: 73.3374% ( 73) 00:07:46.877 11846.892 - 11897.305: 73.9609% ( 81) 00:07:46.877 11897.305 - 11947.717: 74.5844% ( 81) 00:07:46.877 11947.717 - 11998.129: 75.0616% ( 62) 00:07:46.877 11998.129 - 12048.542: 75.5850% ( 68) 00:07:46.877 12048.542 - 12098.954: 76.0776% ( 64) 00:07:46.877 12098.954 - 12149.366: 76.4701% ( 51) 00:07:46.877 12149.366 - 12199.778: 76.8704% ( 52) 00:07:46.877 12199.778 - 12250.191: 77.2629% ( 51) 00:07:46.877 12250.191 - 12300.603: 77.6863% ( 55) 00:07:46.877 12300.603 - 12351.015: 78.0403% ( 46) 00:07:46.877 12351.015 - 12401.428: 78.4714% ( 56) 00:07:46.877 12401.428 - 12451.840: 78.9024% ( 56) 00:07:46.877 12451.840 - 12502.252: 79.2873% ( 50) 00:07:46.877 12502.252 - 12552.665: 79.6413% ( 46) 00:07:46.877 12552.665 - 12603.077: 79.9723% ( 43) 00:07:46.877 12603.077 - 12653.489: 80.3341% ( 47) 00:07:46.877 12653.489 - 12703.902: 80.6650% ( 43) 00:07:46.877 12703.902 - 12754.314: 81.0499% ( 50) 00:07:46.877 12754.314 - 12804.726: 81.4347% ( 50) 00:07:46.877 12804.726 - 12855.138: 81.8119% ( 49) 00:07:46.877 12855.138 - 12905.551: 82.1736% ( 47) 00:07:46.877 12905.551 - 13006.375: 82.9126% ( 96) 00:07:46.877 13006.375 - 13107.200: 83.7977% ( 115) 00:07:46.877 13107.200 - 13208.025: 84.7291% ( 121) 00:07:46.877 13208.025 - 13308.849: 85.6296% ( 117) 00:07:46.877 13308.849 - 13409.674: 86.5379% ( 118) 00:07:46.877 13409.674 - 13510.498: 87.3922% ( 111) 00:07:46.877 13510.498 - 13611.323: 88.2312% ( 109) 00:07:46.877 13611.323 - 13712.148: 89.0009% ( 100) 00:07:46.877 13712.148 - 13812.972: 89.7475% ( 97) 00:07:46.877 13812.972 - 13913.797: 90.3325% ( 76) 00:07:46.877 13913.797 - 14014.622: 90.9714% ( 83) 00:07:46.877 14014.622 - 14115.446: 91.5563% ( 76) 00:07:46.877 14115.446 - 14216.271: 92.1028% ( 71) 00:07:46.877 14216.271 - 14317.095: 92.6801% ( 75) 00:07:46.877 14317.095 - 14417.920: 93.3036% ( 81) 00:07:46.877 14417.920 - 14518.745: 93.8732% ( 74) 00:07:46.877 14518.745 - 14619.569: 94.3504% ( 62) 00:07:46.877 14619.569 - 14720.394: 94.8199% ( 61) 00:07:46.877 14720.394 - 14821.218: 95.2509% ( 56) 00:07:46.877 14821.218 - 14922.043: 95.6666% ( 54) 00:07:46.877 14922.043 - 15022.868: 96.0129% ( 45) 00:07:46.877 15022.868 - 15123.692: 
96.3131% ( 39) 00:07:46.877 15123.692 - 15224.517: 96.5902% ( 36) 00:07:46.877 15224.517 - 15325.342: 96.7518% ( 21) 00:07:46.877 15325.342 - 15426.166: 96.8750% ( 16) 00:07:46.877 15426.166 - 15526.991: 96.9828% ( 14) 00:07:46.877 15526.991 - 15627.815: 97.0443% ( 8) 00:07:46.877 15627.815 - 15728.640: 97.0674% ( 3) 00:07:46.877 15728.640 - 15829.465: 97.1444% ( 10) 00:07:46.877 15829.465 - 15930.289: 97.1983% ( 7) 00:07:46.877 15930.289 - 16031.114: 97.2599% ( 8) 00:07:46.877 16031.114 - 16131.938: 97.3445% ( 11) 00:07:46.877 16131.938 - 16232.763: 97.4061% ( 8) 00:07:46.877 16232.763 - 16333.588: 97.4446% ( 5) 00:07:46.877 16333.588 - 16434.412: 97.4831% ( 5) 00:07:46.877 16434.412 - 16535.237: 97.5292% ( 6) 00:07:46.877 16535.237 - 16636.062: 97.5600% ( 4) 00:07:46.877 16636.062 - 16736.886: 97.6678% ( 14) 00:07:46.877 16736.886 - 16837.711: 97.7833% ( 15) 00:07:46.877 16837.711 - 16938.535: 97.8910% ( 14) 00:07:46.877 16938.535 - 17039.360: 98.0219% ( 17) 00:07:46.877 17039.360 - 17140.185: 98.1219% ( 13) 00:07:46.877 17140.185 - 17241.009: 98.2220% ( 13) 00:07:46.877 17241.009 - 17341.834: 98.3143% ( 12) 00:07:46.877 17341.834 - 17442.658: 98.3990% ( 11) 00:07:46.877 17442.658 - 17543.483: 98.4452% ( 6) 00:07:46.877 17543.483 - 17644.308: 98.4914% ( 6) 00:07:46.877 17644.308 - 17745.132: 98.5760% ( 11) 00:07:46.877 17745.132 - 17845.957: 98.6299% ( 7) 00:07:46.877 17845.957 - 17946.782: 98.6761% ( 6) 00:07:46.877 17946.782 - 18047.606: 98.7223% ( 6) 00:07:46.877 18047.606 - 18148.431: 98.7685% ( 6) 00:07:46.877 18148.431 - 18249.255: 98.8147% ( 6) 00:07:46.877 18249.255 - 18350.080: 98.8685% ( 7) 00:07:46.877 18350.080 - 18450.905: 98.9224% ( 7) 00:07:46.877 18450.905 - 18551.729: 98.9686% ( 6) 00:07:46.877 18551.729 - 18652.554: 99.0071% ( 5) 00:07:46.877 18652.554 - 18753.378: 99.0148% ( 1) 00:07:46.877 24298.732 - 24399.557: 99.0225% ( 1) 00:07:46.877 24399.557 - 24500.382: 99.0533% ( 4) 00:07:46.877 24500.382 - 24601.206: 99.0917% ( 5) 00:07:46.877 24601.206 - 24702.031: 99.1302% ( 5) 00:07:46.877 24702.031 - 24802.855: 99.1610% ( 4) 00:07:46.877 24802.855 - 24903.680: 99.1918% ( 4) 00:07:46.877 24903.680 - 25004.505: 99.2226% ( 4) 00:07:46.877 25004.505 - 25105.329: 99.2611% ( 5) 00:07:46.877 25105.329 - 25206.154: 99.2919% ( 4) 00:07:46.877 25206.154 - 25306.978: 99.3227% ( 4) 00:07:46.877 25306.978 - 25407.803: 99.3534% ( 4) 00:07:46.877 25407.803 - 25508.628: 99.3842% ( 4) 00:07:46.877 25508.628 - 25609.452: 99.4227% ( 5) 00:07:46.877 25609.452 - 25710.277: 99.4612% ( 5) 00:07:46.877 25710.277 - 25811.102: 99.4920% ( 4) 00:07:46.877 25811.102 - 26012.751: 99.5074% ( 2) 00:07:46.877 29239.138 - 29440.788: 99.5382% ( 4) 00:07:46.877 29440.788 - 29642.437: 99.5998% ( 8) 00:07:46.877 29642.437 - 29844.086: 99.6690% ( 9) 00:07:46.877 29844.086 - 30045.735: 99.7383% ( 9) 00:07:46.877 30045.735 - 30247.385: 99.8076% ( 9) 00:07:46.877 30247.385 - 30449.034: 99.8768% ( 9) 00:07:46.877 30449.034 - 30650.683: 99.9461% ( 9) 00:07:46.877 30650.683 - 30852.332: 100.0000% ( 7) 00:07:46.877 00:07:46.877 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:46.877 ============================================================================== 00:07:46.877 Range in us Cumulative IO count 00:07:46.877 5747.003 - 5772.209: 0.0077% ( 1) 00:07:46.877 5772.209 - 5797.415: 0.0231% ( 2) 00:07:46.877 5797.415 - 5822.622: 0.0693% ( 6) 00:07:46.877 5822.622 - 5847.828: 0.1001% ( 4) 00:07:46.877 5847.828 - 5873.034: 0.1232% ( 3) 00:07:46.877 5873.034 - 5898.240: 0.1616% ( 5) 00:07:46.877 
5898.240 - 5923.446: 0.1847% ( 3) 00:07:46.877 5923.446 - 5948.652: 0.2155% ( 4) 00:07:46.877 5948.652 - 5973.858: 0.2694% ( 7) 00:07:46.877 5973.858 - 5999.065: 0.3464% ( 10) 00:07:46.877 5999.065 - 6024.271: 0.4387% ( 12) 00:07:46.877 6024.271 - 6049.477: 0.5003% ( 8) 00:07:46.877 6049.477 - 6074.683: 0.5850% ( 11) 00:07:46.877 6074.683 - 6099.889: 0.6619% ( 10) 00:07:46.877 6099.889 - 6125.095: 0.7543% ( 12) 00:07:46.877 6125.095 - 6150.302: 0.8698% ( 15) 00:07:46.877 6150.302 - 6175.508: 0.9544% ( 11) 00:07:46.877 6175.508 - 6200.714: 1.0468% ( 12) 00:07:46.877 6200.714 - 6225.920: 1.1238% ( 10) 00:07:46.877 6225.920 - 6251.126: 1.2315% ( 14) 00:07:46.877 6251.126 - 6276.332: 1.3393% ( 14) 00:07:46.877 6276.332 - 6301.538: 1.4932% ( 20) 00:07:46.877 6301.538 - 6326.745: 1.6087% ( 15) 00:07:46.877 6326.745 - 6351.951: 1.7703% ( 21) 00:07:46.877 6351.951 - 6377.157: 1.8858% ( 15) 00:07:46.877 6377.157 - 6402.363: 2.0936% ( 27) 00:07:46.877 6402.363 - 6427.569: 2.2475% ( 20) 00:07:46.877 6427.569 - 6452.775: 2.5169% ( 35) 00:07:46.877 6452.775 - 6503.188: 2.9172% ( 52) 00:07:46.877 6503.188 - 6553.600: 3.3713% ( 59) 00:07:46.877 6553.600 - 6604.012: 3.8100% ( 57) 00:07:46.877 6604.012 - 6654.425: 4.2796% ( 61) 00:07:46.877 6654.425 - 6704.837: 4.9184% ( 83) 00:07:46.877 6704.837 - 6755.249: 5.5111% ( 77) 00:07:46.877 6755.249 - 6805.662: 6.1038% ( 77) 00:07:46.877 6805.662 - 6856.074: 6.7657% ( 86) 00:07:46.877 6856.074 - 6906.486: 7.4815% ( 93) 00:07:46.877 6906.486 - 6956.898: 8.1435% ( 86) 00:07:46.877 6956.898 - 7007.311: 8.8593% ( 93) 00:07:46.877 7007.311 - 7057.723: 9.5751% ( 93) 00:07:46.877 7057.723 - 7108.135: 10.3140% ( 96) 00:07:46.877 7108.135 - 7158.548: 11.1838% ( 113) 00:07:46.877 7158.548 - 7208.960: 12.3768% ( 155) 00:07:46.877 7208.960 - 7259.372: 13.8624% ( 193) 00:07:46.877 7259.372 - 7309.785: 15.4403% ( 205) 00:07:46.877 7309.785 - 7360.197: 17.2029% ( 229) 00:07:46.877 7360.197 - 7410.609: 18.9501% ( 227) 00:07:46.878 7410.609 - 7461.022: 20.9129% ( 255) 00:07:46.878 7461.022 - 7511.434: 22.9526% ( 265) 00:07:46.878 7511.434 - 7561.846: 24.9230% ( 256) 00:07:46.878 7561.846 - 7612.258: 26.8935% ( 256) 00:07:46.878 7612.258 - 7662.671: 28.8562% ( 255) 00:07:46.878 7662.671 - 7713.083: 30.8959% ( 265) 00:07:46.878 7713.083 - 7763.495: 32.9510% ( 267) 00:07:46.878 7763.495 - 7813.908: 34.8137% ( 242) 00:07:46.878 7813.908 - 7864.320: 36.7842% ( 256) 00:07:46.878 7864.320 - 7914.732: 38.6546% ( 243) 00:07:46.878 7914.732 - 7965.145: 40.5172% ( 242) 00:07:46.878 7965.145 - 8015.557: 42.3645% ( 240) 00:07:46.878 8015.557 - 8065.969: 44.2041% ( 239) 00:07:46.878 8065.969 - 8116.382: 45.8975% ( 220) 00:07:46.878 8116.382 - 8166.794: 47.4600% ( 203) 00:07:46.878 8166.794 - 8217.206: 48.7839% ( 172) 00:07:46.878 8217.206 - 8267.618: 49.8768% ( 142) 00:07:46.878 8267.618 - 8318.031: 50.7620% ( 115) 00:07:46.878 8318.031 - 8368.443: 51.5471% ( 102) 00:07:46.878 8368.443 - 8418.855: 52.1706% ( 81) 00:07:46.878 8418.855 - 8469.268: 52.5939% ( 55) 00:07:46.878 8469.268 - 8519.680: 52.9788% ( 50) 00:07:46.878 8519.680 - 8570.092: 53.2866% ( 40) 00:07:46.878 8570.092 - 8620.505: 53.6022% ( 41) 00:07:46.878 8620.505 - 8670.917: 53.9178% ( 41) 00:07:46.878 8670.917 - 8721.329: 54.2257% ( 40) 00:07:46.878 8721.329 - 8771.742: 54.4797% ( 33) 00:07:46.878 8771.742 - 8822.154: 54.7414% ( 34) 00:07:46.878 8822.154 - 8872.566: 54.9569% ( 28) 00:07:46.878 8872.566 - 8922.978: 55.1878% ( 30) 00:07:46.878 8922.978 - 8973.391: 55.3494% ( 21) 00:07:46.878 8973.391 - 9023.803: 55.5342% ( 24) 
00:07:46.878 9023.803 - 9074.215: 55.7189% ( 24) 00:07:46.878 9074.215 - 9124.628: 55.9190% ( 26) 00:07:46.878 9124.628 - 9175.040: 56.1730% ( 33) 00:07:46.878 9175.040 - 9225.452: 56.4039% ( 30) 00:07:46.878 9225.452 - 9275.865: 56.6810% ( 36) 00:07:46.878 9275.865 - 9326.277: 56.9504% ( 35) 00:07:46.878 9326.277 - 9376.689: 57.1275% ( 23) 00:07:46.878 9376.689 - 9427.102: 57.4046% ( 36) 00:07:46.878 9427.102 - 9477.514: 57.5739% ( 22) 00:07:46.878 9477.514 - 9527.926: 57.8048% ( 30) 00:07:46.878 9527.926 - 9578.338: 58.0357% ( 30) 00:07:46.878 9578.338 - 9628.751: 58.2435% ( 27) 00:07:46.878 9628.751 - 9679.163: 58.3975% ( 20) 00:07:46.878 9679.163 - 9729.575: 58.5283% ( 17) 00:07:46.878 9729.575 - 9779.988: 58.6746% ( 19) 00:07:46.878 9779.988 - 9830.400: 58.8208% ( 19) 00:07:46.878 9830.400 - 9880.812: 58.9748% ( 20) 00:07:46.878 9880.812 - 9931.225: 59.1441% ( 22) 00:07:46.878 9931.225 - 9981.637: 59.2749% ( 17) 00:07:46.878 9981.637 - 10032.049: 59.4597% ( 24) 00:07:46.878 10032.049 - 10082.462: 59.6213% ( 21) 00:07:46.878 10082.462 - 10132.874: 59.8291% ( 27) 00:07:46.878 10132.874 - 10183.286: 60.0062% ( 23) 00:07:46.878 10183.286 - 10233.698: 60.1678% ( 21) 00:07:46.878 10233.698 - 10284.111: 60.3525% ( 24) 00:07:46.878 10284.111 - 10334.523: 60.5911% ( 31) 00:07:46.878 10334.523 - 10384.935: 60.7913% ( 26) 00:07:46.878 10384.935 - 10435.348: 60.9683% ( 23) 00:07:46.878 10435.348 - 10485.760: 61.0837% ( 15) 00:07:46.878 10485.760 - 10536.172: 61.3070% ( 29) 00:07:46.878 10536.172 - 10586.585: 61.5302% ( 29) 00:07:46.878 10586.585 - 10636.997: 61.7534% ( 29) 00:07:46.878 10636.997 - 10687.409: 62.0151% ( 34) 00:07:46.878 10687.409 - 10737.822: 62.3230% ( 40) 00:07:46.878 10737.822 - 10788.234: 62.7078% ( 50) 00:07:46.878 10788.234 - 10838.646: 63.1542% ( 58) 00:07:46.878 10838.646 - 10889.058: 63.5776% ( 55) 00:07:46.878 10889.058 - 10939.471: 63.9470% ( 48) 00:07:46.878 10939.471 - 10989.883: 64.3858% ( 57) 00:07:46.878 10989.883 - 11040.295: 64.7552% ( 48) 00:07:46.878 11040.295 - 11090.708: 65.2709% ( 67) 00:07:46.878 11090.708 - 11141.120: 65.6481% ( 49) 00:07:46.878 11141.120 - 11191.532: 66.0099% ( 47) 00:07:46.878 11191.532 - 11241.945: 66.4332% ( 55) 00:07:46.878 11241.945 - 11292.357: 66.9104% ( 62) 00:07:46.878 11292.357 - 11342.769: 67.4492% ( 70) 00:07:46.878 11342.769 - 11393.182: 68.0650% ( 80) 00:07:46.878 11393.182 - 11443.594: 68.7115% ( 84) 00:07:46.878 11443.594 - 11494.006: 69.3966% ( 89) 00:07:46.878 11494.006 - 11544.418: 69.9430% ( 71) 00:07:46.878 11544.418 - 11594.831: 70.5126% ( 74) 00:07:46.878 11594.831 - 11645.243: 71.0822% ( 74) 00:07:46.878 11645.243 - 11695.655: 71.6287% ( 71) 00:07:46.878 11695.655 - 11746.068: 72.2214% ( 77) 00:07:46.878 11746.068 - 11796.480: 72.8525% ( 82) 00:07:46.878 11796.480 - 11846.892: 73.3605% ( 66) 00:07:46.878 11846.892 - 11897.305: 73.8762% ( 67) 00:07:46.878 11897.305 - 11947.717: 74.3919% ( 67) 00:07:46.878 11947.717 - 11998.129: 74.9461% ( 72) 00:07:46.878 11998.129 - 12048.542: 75.3310% ( 50) 00:07:46.878 12048.542 - 12098.954: 75.8390% ( 66) 00:07:46.878 12098.954 - 12149.366: 76.3162% ( 62) 00:07:46.878 12149.366 - 12199.778: 76.8011% ( 63) 00:07:46.878 12199.778 - 12250.191: 77.2783% ( 62) 00:07:46.878 12250.191 - 12300.603: 77.9172% ( 83) 00:07:46.878 12300.603 - 12351.015: 78.4560% ( 70) 00:07:46.878 12351.015 - 12401.428: 78.7869% ( 43) 00:07:46.878 12401.428 - 12451.840: 79.2180% ( 56) 00:07:46.878 12451.840 - 12502.252: 79.7183% ( 65) 00:07:46.878 12502.252 - 12552.665: 80.0031% ( 37) 00:07:46.878 12552.665 
- 12603.077: 80.3417% ( 44) 00:07:46.878 12603.077 - 12653.489: 80.8959% ( 72) 00:07:46.878 12653.489 - 12703.902: 81.2269% ( 43) 00:07:46.878 12703.902 - 12754.314: 81.7811% ( 72) 00:07:46.878 12754.314 - 12804.726: 82.1352% ( 46) 00:07:46.878 12804.726 - 12855.138: 82.5508% ( 54) 00:07:46.878 12855.138 - 12905.551: 83.0434% ( 64) 00:07:46.878 12905.551 - 13006.375: 83.9825% ( 122) 00:07:46.878 13006.375 - 13107.200: 84.8830% ( 117) 00:07:46.878 13107.200 - 13208.025: 85.8297% ( 123) 00:07:46.878 13208.025 - 13308.849: 86.6379% ( 105) 00:07:46.878 13308.849 - 13409.674: 87.2768% ( 83) 00:07:46.878 13409.674 - 13510.498: 88.1004% ( 107) 00:07:46.878 13510.498 - 13611.323: 88.7777% ( 88) 00:07:46.878 13611.323 - 13712.148: 89.5089% ( 95) 00:07:46.878 13712.148 - 13812.972: 90.0554% ( 71) 00:07:46.878 13812.972 - 13913.797: 90.6558% ( 78) 00:07:46.878 13913.797 - 14014.622: 91.1407% ( 63) 00:07:46.878 14014.622 - 14115.446: 91.6179% ( 62) 00:07:46.878 14115.446 - 14216.271: 92.0259% ( 53) 00:07:46.878 14216.271 - 14317.095: 92.5339% ( 66) 00:07:46.878 14317.095 - 14417.920: 92.9495% ( 54) 00:07:46.878 14417.920 - 14518.745: 93.2497% ( 39) 00:07:46.878 14518.745 - 14619.569: 93.5730% ( 42) 00:07:46.878 14619.569 - 14720.394: 93.9886% ( 54) 00:07:46.878 14720.394 - 14821.218: 94.5582% ( 74) 00:07:46.878 14821.218 - 14922.043: 94.9200% ( 47) 00:07:46.878 14922.043 - 15022.868: 95.2278% ( 40) 00:07:46.878 15022.868 - 15123.692: 95.5434% ( 41) 00:07:46.878 15123.692 - 15224.517: 95.8513% ( 40) 00:07:46.878 15224.517 - 15325.342: 96.0283% ( 23) 00:07:46.878 15325.342 - 15426.166: 96.3516% ( 42) 00:07:46.878 15426.166 - 15526.991: 96.5440% ( 25) 00:07:46.878 15526.991 - 15627.815: 96.7595% ( 28) 00:07:46.878 15627.815 - 15728.640: 96.9289% ( 22) 00:07:46.878 15728.640 - 15829.465: 97.1444% ( 28) 00:07:46.878 15829.465 - 15930.289: 97.2291% ( 11) 00:07:46.878 15930.289 - 16031.114: 97.3445% ( 15) 00:07:46.878 16031.114 - 16131.938: 97.4369% ( 12) 00:07:46.878 16131.938 - 16232.763: 97.5908% ( 20) 00:07:46.878 16232.763 - 16333.588: 97.6524% ( 8) 00:07:46.878 16333.588 - 16434.412: 97.6832% ( 4) 00:07:46.878 16434.412 - 16535.237: 97.7909% ( 14) 00:07:46.878 16535.237 - 16636.062: 97.8294% ( 5) 00:07:46.878 16636.062 - 16736.886: 97.8833% ( 7) 00:07:46.878 16736.886 - 16837.711: 97.8987% ( 2) 00:07:46.878 16837.711 - 16938.535: 97.9218% ( 3) 00:07:46.878 16938.535 - 17039.360: 97.9680% ( 6) 00:07:46.878 17039.360 - 17140.185: 98.0603% ( 12) 00:07:46.878 17140.185 - 17241.009: 98.1450% ( 11) 00:07:46.878 17241.009 - 17341.834: 98.1681% ( 3) 00:07:46.878 17341.834 - 17442.658: 98.2143% ( 6) 00:07:46.878 17442.658 - 17543.483: 98.2605% ( 6) 00:07:46.878 17543.483 - 17644.308: 98.2913% ( 4) 00:07:46.878 17644.308 - 17745.132: 98.3220% ( 4) 00:07:46.878 17745.132 - 17845.957: 98.3759% ( 7) 00:07:46.878 17845.957 - 17946.782: 98.4144% ( 5) 00:07:46.878 17946.782 - 18047.606: 98.4452% ( 4) 00:07:46.878 18047.606 - 18148.431: 98.5299% ( 11) 00:07:46.878 18148.431 - 18249.255: 98.5991% ( 9) 00:07:46.878 18249.255 - 18350.080: 98.6530% ( 7) 00:07:46.878 18350.080 - 18450.905: 98.6761% ( 3) 00:07:46.878 18450.905 - 18551.729: 98.6992% ( 3) 00:07:46.878 18551.729 - 18652.554: 98.7377% ( 5) 00:07:46.878 18652.554 - 18753.378: 98.7685% ( 4) 00:07:46.878 18753.378 - 18854.203: 98.8070% ( 5) 00:07:46.878 18854.203 - 18955.028: 98.8300% ( 3) 00:07:46.878 18955.028 - 19055.852: 98.8762% ( 6) 00:07:46.878 19055.852 - 19156.677: 98.9224% ( 6) 00:07:46.878 19156.677 - 19257.502: 98.9609% ( 5) 00:07:46.879 19257.502 - 
19358.326: 99.0148% ( 7) 00:07:46.879 22786.363 - 22887.188: 99.0302% ( 2) 00:07:46.879 22887.188 - 22988.012: 99.0687% ( 5) 00:07:46.879 22988.012 - 23088.837: 99.0994% ( 4) 00:07:46.879 23088.837 - 23189.662: 99.1225% ( 3) 00:07:46.879 23189.662 - 23290.486: 99.1610% ( 5) 00:07:46.879 23290.486 - 23391.311: 99.1841% ( 3) 00:07:46.879 23391.311 - 23492.135: 99.2226% ( 5) 00:07:46.879 23492.135 - 23592.960: 99.2457% ( 3) 00:07:46.879 23592.960 - 23693.785: 99.2842% ( 5) 00:07:46.879 23693.785 - 23794.609: 99.3073% ( 3) 00:07:46.879 23794.609 - 23895.434: 99.3381% ( 4) 00:07:46.879 23895.434 - 23996.258: 99.3688% ( 4) 00:07:46.879 23996.258 - 24097.083: 99.3919% ( 3) 00:07:46.879 24097.083 - 24197.908: 99.4227% ( 4) 00:07:46.879 24197.908 - 24298.732: 99.4535% ( 4) 00:07:46.879 24298.732 - 24399.557: 99.4843% ( 4) 00:07:46.879 24399.557 - 24500.382: 99.5074% ( 3) 00:07:46.879 27625.945 - 27827.594: 99.5228% ( 2) 00:07:46.879 27827.594 - 28029.243: 99.5844% ( 8) 00:07:46.879 28029.243 - 28230.892: 99.6382% ( 7) 00:07:46.879 28230.892 - 28432.542: 99.6998% ( 8) 00:07:46.879 28432.542 - 28634.191: 99.7614% ( 8) 00:07:46.879 28634.191 - 28835.840: 99.8153% ( 7) 00:07:46.879 28835.840 - 29037.489: 99.8845% ( 9) 00:07:46.879 29037.489 - 29239.138: 99.9461% ( 8) 00:07:46.879 29239.138 - 29440.788: 100.0000% ( 7) 00:07:46.879 00:07:46.879 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:46.879 ============================================================================== 00:07:46.879 Range in us Cumulative IO count 00:07:46.879 5898.240 - 5923.446: 0.0231% ( 3) 00:07:46.879 5923.446 - 5948.652: 0.0924% ( 9) 00:07:46.879 5948.652 - 5973.858: 0.1308% ( 5) 00:07:46.879 5973.858 - 5999.065: 0.2001% ( 9) 00:07:46.879 5999.065 - 6024.271: 0.2463% ( 6) 00:07:46.879 6024.271 - 6049.477: 0.2694% ( 3) 00:07:46.879 6049.477 - 6074.683: 0.2925% ( 3) 00:07:46.879 6074.683 - 6099.889: 0.3464% ( 7) 00:07:46.879 6099.889 - 6125.095: 0.3849% ( 5) 00:07:46.879 6125.095 - 6150.302: 0.4464% ( 8) 00:07:46.879 6150.302 - 6175.508: 0.5619% ( 15) 00:07:46.879 6175.508 - 6200.714: 0.6696% ( 14) 00:07:46.879 6200.714 - 6225.920: 0.7466% ( 10) 00:07:46.879 6225.920 - 6251.126: 0.8313% ( 11) 00:07:46.879 6251.126 - 6276.332: 0.9236% ( 12) 00:07:46.879 6276.332 - 6301.538: 1.0314% ( 14) 00:07:46.879 6301.538 - 6326.745: 1.1776% ( 19) 00:07:46.879 6326.745 - 6351.951: 1.3316% ( 20) 00:07:46.879 6351.951 - 6377.157: 1.5317% ( 26) 00:07:46.879 6377.157 - 6402.363: 1.7087% ( 23) 00:07:46.879 6402.363 - 6427.569: 1.8781% ( 22) 00:07:46.879 6427.569 - 6452.775: 2.0936% ( 28) 00:07:46.879 6452.775 - 6503.188: 2.5092% ( 54) 00:07:46.879 6503.188 - 6553.600: 2.9634% ( 59) 00:07:46.879 6553.600 - 6604.012: 3.4714% ( 66) 00:07:46.879 6604.012 - 6654.425: 4.1333% ( 86) 00:07:46.879 6654.425 - 6704.837: 4.7260% ( 77) 00:07:46.879 6704.837 - 6755.249: 5.3264% ( 78) 00:07:46.879 6755.249 - 6805.662: 5.9652% ( 83) 00:07:46.879 6805.662 - 6856.074: 6.6195% ( 85) 00:07:46.879 6856.074 - 6906.486: 7.3276% ( 92) 00:07:46.879 6906.486 - 6956.898: 8.0665% ( 96) 00:07:46.879 6956.898 - 7007.311: 8.8285% ( 99) 00:07:46.879 7007.311 - 7057.723: 9.5751% ( 97) 00:07:46.879 7057.723 - 7108.135: 10.3602% ( 102) 00:07:46.879 7108.135 - 7158.548: 11.1761% ( 106) 00:07:46.879 7158.548 - 7208.960: 11.9535% ( 101) 00:07:46.879 7208.960 - 7259.372: 12.8925% ( 122) 00:07:46.879 7259.372 - 7309.785: 14.1318% ( 161) 00:07:46.879 7309.785 - 7360.197: 15.8405% ( 222) 00:07:46.879 7360.197 - 7410.609: 17.5647% ( 224) 00:07:46.879 7410.609 - 
7461.022: 19.5351% ( 256) 00:07:46.879 7461.022 - 7511.434: 21.4748% ( 252) 00:07:46.879 7511.434 - 7561.846: 23.5683% ( 272) 00:07:46.879 7561.846 - 7612.258: 25.7774% ( 287) 00:07:46.879 7612.258 - 7662.671: 27.8633% ( 271) 00:07:46.879 7662.671 - 7713.083: 30.0108% ( 279) 00:07:46.879 7713.083 - 7763.495: 32.2506% ( 291) 00:07:46.879 7763.495 - 7813.908: 34.5135% ( 294) 00:07:46.879 7813.908 - 7864.320: 36.6918% ( 283) 00:07:46.879 7864.320 - 7914.732: 38.9778% ( 297) 00:07:46.879 7914.732 - 7965.145: 41.1561% ( 283) 00:07:46.879 7965.145 - 8015.557: 43.3882% ( 290) 00:07:46.879 8015.557 - 8065.969: 45.4433% ( 267) 00:07:46.879 8065.969 - 8116.382: 47.2060% ( 229) 00:07:46.879 8116.382 - 8166.794: 48.6838% ( 192) 00:07:46.879 8166.794 - 8217.206: 49.9076% ( 159) 00:07:46.879 8217.206 - 8267.618: 50.9083% ( 130) 00:07:46.879 8267.618 - 8318.031: 51.6472% ( 96) 00:07:46.879 8318.031 - 8368.443: 52.2475% ( 78) 00:07:46.879 8368.443 - 8418.855: 52.7478% ( 65) 00:07:46.879 8418.855 - 8469.268: 53.1866% ( 57) 00:07:46.879 8469.268 - 8519.680: 53.5560% ( 48) 00:07:46.879 8519.680 - 8570.092: 53.9409% ( 50) 00:07:46.879 8570.092 - 8620.505: 54.2796% ( 44) 00:07:46.879 8620.505 - 8670.917: 54.5336% ( 33) 00:07:46.879 8670.917 - 8721.329: 54.6952% ( 21) 00:07:46.879 8721.329 - 8771.742: 54.9030% ( 27) 00:07:46.879 8771.742 - 8822.154: 55.1185% ( 28) 00:07:46.879 8822.154 - 8872.566: 55.3187% ( 26) 00:07:46.879 8872.566 - 8922.978: 55.5265% ( 27) 00:07:46.879 8922.978 - 8973.391: 55.7035% ( 23) 00:07:46.879 8973.391 - 9023.803: 55.9344% ( 30) 00:07:46.879 9023.803 - 9074.215: 56.1422% ( 27) 00:07:46.879 9074.215 - 9124.628: 56.4116% ( 35) 00:07:46.879 9124.628 - 9175.040: 56.6579% ( 32) 00:07:46.879 9175.040 - 9225.452: 56.8966% ( 31) 00:07:46.879 9225.452 - 9275.865: 57.1044% ( 27) 00:07:46.879 9275.865 - 9326.277: 57.3353% ( 30) 00:07:46.879 9326.277 - 9376.689: 57.5662% ( 30) 00:07:46.879 9376.689 - 9427.102: 57.7817% ( 28) 00:07:46.879 9427.102 - 9477.514: 57.9664% ( 24) 00:07:46.879 9477.514 - 9527.926: 58.1435% ( 23) 00:07:46.879 9527.926 - 9578.338: 58.2897% ( 19) 00:07:46.879 9578.338 - 9628.751: 58.4129% ( 16) 00:07:46.879 9628.751 - 9679.163: 58.5591% ( 19) 00:07:46.879 9679.163 - 9729.575: 58.6900% ( 17) 00:07:46.879 9729.575 - 9779.988: 58.8439% ( 20) 00:07:46.879 9779.988 - 9830.400: 59.0517% ( 27) 00:07:46.879 9830.400 - 9880.812: 59.2211% ( 22) 00:07:46.879 9880.812 - 9931.225: 59.4366% ( 28) 00:07:46.879 9931.225 - 9981.637: 59.6675% ( 30) 00:07:46.879 9981.637 - 10032.049: 59.8676% ( 26) 00:07:46.879 10032.049 - 10082.462: 60.0908% ( 29) 00:07:46.879 10082.462 - 10132.874: 60.2986% ( 27) 00:07:46.879 10132.874 - 10183.286: 60.4834% ( 24) 00:07:46.879 10183.286 - 10233.698: 60.6912% ( 27) 00:07:46.879 10233.698 - 10284.111: 60.8682% ( 23) 00:07:46.879 10284.111 - 10334.523: 61.0607% ( 25) 00:07:46.879 10334.523 - 10384.935: 61.2300% ( 22) 00:07:46.879 10384.935 - 10435.348: 61.4994% ( 35) 00:07:46.879 10435.348 - 10485.760: 61.7611% ( 34) 00:07:46.879 10485.760 - 10536.172: 61.9997% ( 31) 00:07:46.879 10536.172 - 10586.585: 62.2306% ( 30) 00:07:46.879 10586.585 - 10636.997: 62.4769% ( 32) 00:07:46.879 10636.997 - 10687.409: 62.6924% ( 28) 00:07:46.879 10687.409 - 10737.822: 62.9002% ( 27) 00:07:46.879 10737.822 - 10788.234: 63.0696% ( 22) 00:07:46.879 10788.234 - 10838.646: 63.2774% ( 27) 00:07:46.879 10838.646 - 10889.058: 63.4852% ( 27) 00:07:46.879 10889.058 - 10939.471: 63.7546% ( 35) 00:07:46.879 10939.471 - 10989.883: 64.0779% ( 42) 00:07:46.879 10989.883 - 11040.295: 
64.4012% ( 42) 00:07:46.879 11040.295 - 11090.708: 64.7783% ( 49) 00:07:46.879 11090.708 - 11141.120: 65.1786% ( 52) 00:07:46.879 11141.120 - 11191.532: 65.6558% ( 62) 00:07:46.879 11191.532 - 11241.945: 66.1330% ( 62) 00:07:46.879 11241.945 - 11292.357: 66.6333% ( 65) 00:07:46.879 11292.357 - 11342.769: 67.1798% ( 71) 00:07:46.879 11342.769 - 11393.182: 67.6570% ( 62) 00:07:46.879 11393.182 - 11443.594: 68.1496% ( 64) 00:07:46.879 11443.594 - 11494.006: 68.6268% ( 62) 00:07:46.879 11494.006 - 11544.418: 69.2195% ( 77) 00:07:46.879 11544.418 - 11594.831: 69.7660% ( 71) 00:07:46.879 11594.831 - 11645.243: 70.3125% ( 71) 00:07:46.879 11645.243 - 11695.655: 70.9052% ( 77) 00:07:46.879 11695.655 - 11746.068: 71.4286% ( 68) 00:07:46.879 11746.068 - 11796.480: 71.9905% ( 73) 00:07:46.879 11796.480 - 11846.892: 72.5523% ( 73) 00:07:46.879 11846.892 - 11897.305: 73.0680% ( 67) 00:07:46.879 11897.305 - 11947.717: 73.5607% ( 64) 00:07:46.879 11947.717 - 11998.129: 74.0302% ( 61) 00:07:46.879 11998.129 - 12048.542: 74.5305% ( 65) 00:07:46.879 12048.542 - 12098.954: 74.9923% ( 60) 00:07:46.879 12098.954 - 12149.366: 75.4310% ( 57) 00:07:46.879 12149.366 - 12199.778: 75.9006% ( 61) 00:07:46.879 12199.778 - 12250.191: 76.2854% ( 50) 00:07:46.879 12250.191 - 12300.603: 76.7780% ( 64) 00:07:46.879 12300.603 - 12351.015: 77.2860% ( 66) 00:07:46.879 12351.015 - 12401.428: 77.8402% ( 72) 00:07:46.879 12401.428 - 12451.840: 78.2789% ( 57) 00:07:46.880 12451.840 - 12502.252: 78.7869% ( 66) 00:07:46.880 12502.252 - 12552.665: 79.2565% ( 61) 00:07:46.880 12552.665 - 12603.077: 79.8183% ( 73) 00:07:46.880 12603.077 - 12653.489: 80.3802% ( 73) 00:07:46.880 12653.489 - 12703.902: 80.9036% ( 68) 00:07:46.880 12703.902 - 12754.314: 81.4578% ( 72) 00:07:46.880 12754.314 - 12804.726: 82.0736% ( 80) 00:07:46.880 12804.726 - 12855.138: 82.5970% ( 68) 00:07:46.880 12855.138 - 12905.551: 83.2435% ( 84) 00:07:46.880 12905.551 - 13006.375: 84.3673% ( 146) 00:07:46.880 13006.375 - 13107.200: 85.4526% ( 141) 00:07:46.880 13107.200 - 13208.025: 86.3839% ( 121) 00:07:46.880 13208.025 - 13308.849: 87.2614% ( 114) 00:07:46.880 13308.849 - 13409.674: 88.0465% ( 102) 00:07:46.880 13409.674 - 13510.498: 88.6853% ( 83) 00:07:46.880 13510.498 - 13611.323: 89.1395% ( 59) 00:07:46.880 13611.323 - 13712.148: 89.5859% ( 58) 00:07:46.880 13712.148 - 13812.972: 90.0939% ( 66) 00:07:46.880 13812.972 - 13913.797: 90.6404% ( 71) 00:07:46.880 13913.797 - 14014.622: 91.0945% ( 59) 00:07:46.880 14014.622 - 14115.446: 91.5333% ( 57) 00:07:46.880 14115.446 - 14216.271: 91.9566% ( 55) 00:07:46.880 14216.271 - 14317.095: 92.3722% ( 54) 00:07:46.880 14317.095 - 14417.920: 92.7879% ( 54) 00:07:46.880 14417.920 - 14518.745: 93.1958% ( 53) 00:07:46.880 14518.745 - 14619.569: 93.5807% ( 50) 00:07:46.880 14619.569 - 14720.394: 93.9732% ( 51) 00:07:46.880 14720.394 - 14821.218: 94.3735% ( 52) 00:07:46.880 14821.218 - 14922.043: 94.7198% ( 45) 00:07:46.880 14922.043 - 15022.868: 95.0508% ( 43) 00:07:46.880 15022.868 - 15123.692: 95.3279% ( 36) 00:07:46.880 15123.692 - 15224.517: 95.5973% ( 35) 00:07:46.880 15224.517 - 15325.342: 95.8128% ( 28) 00:07:46.880 15325.342 - 15426.166: 95.9437% ( 17) 00:07:46.880 15426.166 - 15526.991: 96.1207% ( 23) 00:07:46.880 15526.991 - 15627.815: 96.4055% ( 37) 00:07:46.880 15627.815 - 15728.640: 96.6210% ( 28) 00:07:46.880 15728.640 - 15829.465: 96.8673% ( 32) 00:07:46.880 15829.465 - 15930.289: 97.0674% ( 26) 00:07:46.880 15930.289 - 16031.114: 97.2675% ( 26) 00:07:46.880 16031.114 - 16131.938: 97.4600% ( 25) 00:07:46.880 
16131.938 - 16232.763: 97.5908% ( 17) 00:07:46.880 16232.763 - 16333.588: 97.7371% ( 19) 00:07:46.880 16333.588 - 16434.412: 97.8140% ( 10) 00:07:46.880 16434.412 - 16535.237: 97.8756% ( 8) 00:07:46.880 16535.237 - 16636.062: 97.9064% ( 4) 00:07:46.880 16636.062 - 16736.886: 97.9449% ( 5) 00:07:46.880 16736.886 - 16837.711: 97.9757% ( 4) 00:07:46.880 16837.711 - 16938.535: 98.0065% ( 4) 00:07:46.880 16938.535 - 17039.360: 98.0296% ( 3) 00:07:46.880 17341.834 - 17442.658: 98.0450% ( 2) 00:07:46.880 17442.658 - 17543.483: 98.1681% ( 16) 00:07:46.880 17543.483 - 17644.308: 98.2682% ( 13) 00:07:46.880 17644.308 - 17745.132: 98.3605% ( 12) 00:07:46.880 17745.132 - 17845.957: 98.4606% ( 13) 00:07:46.880 17845.957 - 17946.782: 98.5607% ( 13) 00:07:46.880 17946.782 - 18047.606: 98.6607% ( 13) 00:07:46.880 18047.606 - 18148.431: 98.7531% ( 12) 00:07:46.880 18148.431 - 18249.255: 98.8454% ( 12) 00:07:46.880 18249.255 - 18350.080: 98.9455% ( 13) 00:07:46.880 18350.080 - 18450.905: 99.0148% ( 9) 00:07:46.880 21072.345 - 21173.169: 99.0302% ( 2) 00:07:46.880 21173.169 - 21273.994: 99.0610% ( 4) 00:07:46.880 21273.994 - 21374.818: 99.0841% ( 3) 00:07:46.880 21374.818 - 21475.643: 99.1225% ( 5) 00:07:46.880 21475.643 - 21576.468: 99.1533% ( 4) 00:07:46.880 21576.468 - 21677.292: 99.1841% ( 4) 00:07:46.880 21677.292 - 21778.117: 99.2149% ( 4) 00:07:46.880 21778.117 - 21878.942: 99.2457% ( 4) 00:07:46.880 21878.942 - 21979.766: 99.2842% ( 5) 00:07:46.880 21979.766 - 22080.591: 99.3150% ( 4) 00:07:46.880 22080.591 - 22181.415: 99.3458% ( 4) 00:07:46.880 22181.415 - 22282.240: 99.3842% ( 5) 00:07:46.880 22282.240 - 22383.065: 99.4150% ( 4) 00:07:46.880 22383.065 - 22483.889: 99.4458% ( 4) 00:07:46.880 22483.889 - 22584.714: 99.4766% ( 4) 00:07:46.880 22584.714 - 22685.538: 99.5074% ( 4) 00:07:46.880 26012.751 - 26214.400: 99.5459% ( 5) 00:07:46.880 26214.400 - 26416.049: 99.6075% ( 8) 00:07:46.880 26416.049 - 26617.698: 99.6690% ( 8) 00:07:46.880 26617.698 - 26819.348: 99.7306% ( 8) 00:07:46.880 26819.348 - 27020.997: 99.7922% ( 8) 00:07:46.880 27020.997 - 27222.646: 99.8615% ( 9) 00:07:46.880 27222.646 - 27424.295: 99.9230% ( 8) 00:07:46.880 27424.295 - 27625.945: 99.9923% ( 9) 00:07:46.880 27625.945 - 27827.594: 100.0000% ( 1) 00:07:46.880 00:07:46.880 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:46.880 ============================================================================== 00:07:46.880 Range in us Cumulative IO count 00:07:46.880 5898.240 - 5923.446: 0.0385% ( 5) 00:07:46.880 5923.446 - 5948.652: 0.0770% ( 5) 00:07:46.880 5948.652 - 5973.858: 0.0924% ( 2) 00:07:46.880 5973.858 - 5999.065: 0.1001% ( 1) 00:07:46.880 5999.065 - 6024.271: 0.1539% ( 7) 00:07:46.880 6024.271 - 6049.477: 0.2540% ( 13) 00:07:46.880 6049.477 - 6074.683: 0.3156% ( 8) 00:07:46.880 6074.683 - 6099.889: 0.3618% ( 6) 00:07:46.880 6099.889 - 6125.095: 0.4002% ( 5) 00:07:46.880 6125.095 - 6150.302: 0.4541% ( 7) 00:07:46.880 6150.302 - 6175.508: 0.5157% ( 8) 00:07:46.880 6175.508 - 6200.714: 0.6158% ( 13) 00:07:46.880 6200.714 - 6225.920: 0.7312% ( 15) 00:07:46.880 6225.920 - 6251.126: 0.8236% ( 12) 00:07:46.880 6251.126 - 6276.332: 1.0006% ( 23) 00:07:46.880 6276.332 - 6301.538: 1.1161% ( 15) 00:07:46.880 6301.538 - 6326.745: 1.2623% ( 19) 00:07:46.880 6326.745 - 6351.951: 1.3932% ( 17) 00:07:46.880 6351.951 - 6377.157: 1.5240% ( 17) 00:07:46.880 6377.157 - 6402.363: 1.6472% ( 16) 00:07:46.880 6402.363 - 6427.569: 1.8088% ( 21) 00:07:46.880 6427.569 - 6452.775: 1.9627% ( 20) 00:07:46.880 6452.775 - 6503.188: 
2.3014% ( 44) 00:07:46.880 6503.188 - 6553.600: 2.7632% ( 60) 00:07:46.880 6553.600 - 6604.012: 3.2482% ( 63) 00:07:46.880 6604.012 - 6654.425: 3.7485% ( 65) 00:07:46.880 6654.425 - 6704.837: 4.3411% ( 77) 00:07:46.880 6704.837 - 6755.249: 4.9492% ( 79) 00:07:46.880 6755.249 - 6805.662: 5.5034% ( 72) 00:07:46.880 6805.662 - 6856.074: 6.1038% ( 78) 00:07:46.880 6856.074 - 6906.486: 6.6810% ( 75) 00:07:46.880 6906.486 - 6956.898: 7.4276% ( 97) 00:07:46.880 6956.898 - 7007.311: 8.0588% ( 82) 00:07:46.880 7007.311 - 7057.723: 8.8054% ( 97) 00:07:46.880 7057.723 - 7108.135: 9.6444% ( 109) 00:07:46.880 7108.135 - 7158.548: 10.4449% ( 104) 00:07:46.880 7158.548 - 7208.960: 11.3839% ( 122) 00:07:46.880 7208.960 - 7259.372: 12.5308% ( 149) 00:07:46.880 7259.372 - 7309.785: 13.9470% ( 184) 00:07:46.880 7309.785 - 7360.197: 15.6943% ( 227) 00:07:46.880 7360.197 - 7410.609: 17.6955% ( 260) 00:07:46.880 7410.609 - 7461.022: 19.6352% ( 252) 00:07:46.880 7461.022 - 7511.434: 21.5748% ( 252) 00:07:46.880 7511.434 - 7561.846: 23.7146% ( 278) 00:07:46.880 7561.846 - 7612.258: 25.9698% ( 293) 00:07:46.880 7612.258 - 7662.671: 28.1635% ( 285) 00:07:46.880 7662.671 - 7713.083: 30.4572% ( 298) 00:07:46.880 7713.083 - 7763.495: 32.6586% ( 286) 00:07:46.880 7763.495 - 7813.908: 34.9138% ( 293) 00:07:46.880 7813.908 - 7864.320: 37.1459% ( 290) 00:07:46.880 7864.320 - 7914.732: 39.3704% ( 289) 00:07:46.881 7914.732 - 7965.145: 41.5179% ( 279) 00:07:46.881 7965.145 - 8015.557: 43.5807% ( 268) 00:07:46.881 8015.557 - 8065.969: 45.3741% ( 233) 00:07:46.881 8065.969 - 8116.382: 46.9982% ( 211) 00:07:46.881 8116.382 - 8166.794: 48.3990% ( 182) 00:07:46.881 8166.794 - 8217.206: 49.4612% ( 138) 00:07:46.881 8217.206 - 8267.618: 50.2694% ( 105) 00:07:46.881 8267.618 - 8318.031: 50.9775% ( 92) 00:07:46.881 8318.031 - 8368.443: 51.5779% ( 78) 00:07:46.881 8368.443 - 8418.855: 52.1013% ( 68) 00:07:46.881 8418.855 - 8469.268: 52.5169% ( 54) 00:07:46.881 8469.268 - 8519.680: 52.8787% ( 47) 00:07:46.881 8519.680 - 8570.092: 53.2020% ( 42) 00:07:46.881 8570.092 - 8620.505: 53.4637% ( 34) 00:07:46.881 8620.505 - 8670.917: 53.6792% ( 28) 00:07:46.881 8670.917 - 8721.329: 53.9024% ( 29) 00:07:46.881 8721.329 - 8771.742: 54.0948% ( 25) 00:07:46.881 8771.742 - 8822.154: 54.2873% ( 25) 00:07:46.881 8822.154 - 8872.566: 54.4874% ( 26) 00:07:46.881 8872.566 - 8922.978: 54.6721% ( 24) 00:07:46.881 8922.978 - 8973.391: 54.8722% ( 26) 00:07:46.881 8973.391 - 9023.803: 55.0339% ( 21) 00:07:46.881 9023.803 - 9074.215: 55.2109% ( 23) 00:07:46.881 9074.215 - 9124.628: 55.3802% ( 22) 00:07:46.881 9124.628 - 9175.040: 55.5496% ( 22) 00:07:46.881 9175.040 - 9225.452: 55.7343% ( 24) 00:07:46.881 9225.452 - 9275.865: 56.0037% ( 35) 00:07:46.881 9275.865 - 9326.277: 56.1576% ( 20) 00:07:46.881 9326.277 - 9376.689: 56.3808% ( 29) 00:07:46.881 9376.689 - 9427.102: 56.6272% ( 32) 00:07:46.881 9427.102 - 9477.514: 56.9119% ( 37) 00:07:46.881 9477.514 - 9527.926: 57.1967% ( 37) 00:07:46.881 9527.926 - 9578.338: 57.5123% ( 41) 00:07:46.881 9578.338 - 9628.751: 57.8433% ( 43) 00:07:46.881 9628.751 - 9679.163: 58.1666% ( 42) 00:07:46.881 9679.163 - 9729.575: 58.4360% ( 35) 00:07:46.881 9729.575 - 9779.988: 58.7131% ( 36) 00:07:46.881 9779.988 - 9830.400: 59.0286% ( 41) 00:07:46.881 9830.400 - 9880.812: 59.3904% ( 47) 00:07:46.881 9880.812 - 9931.225: 59.7368% ( 45) 00:07:46.881 9931.225 - 9981.637: 60.0216% ( 37) 00:07:46.881 9981.637 - 10032.049: 60.3063% ( 37) 00:07:46.881 10032.049 - 10082.462: 60.6142% ( 40) 00:07:46.881 10082.462 - 10132.874: 60.9452% ( 
43) 00:07:46.881 10132.874 - 10183.286: 61.2223% ( 36) 00:07:46.881 10183.286 - 10233.698: 61.5071% ( 37) 00:07:46.881 10233.698 - 10284.111: 61.7765% ( 35) 00:07:46.881 10284.111 - 10334.523: 62.0613% ( 37) 00:07:46.881 10334.523 - 10384.935: 62.3153% ( 33) 00:07:46.881 10384.935 - 10435.348: 62.5770% ( 34) 00:07:46.881 10435.348 - 10485.760: 62.8310% ( 33) 00:07:46.881 10485.760 - 10536.172: 63.0542% ( 29) 00:07:46.881 10536.172 - 10586.585: 63.2697% ( 28) 00:07:46.881 10586.585 - 10636.997: 63.4775% ( 27) 00:07:46.881 10636.997 - 10687.409: 63.7084% ( 30) 00:07:46.881 10687.409 - 10737.822: 63.9624% ( 33) 00:07:46.881 10737.822 - 10788.234: 64.1933% ( 30) 00:07:46.881 10788.234 - 10838.646: 64.4320% ( 31) 00:07:46.881 10838.646 - 10889.058: 64.6860% ( 33) 00:07:46.881 10889.058 - 10939.471: 64.9323% ( 32) 00:07:46.881 10939.471 - 10989.883: 65.1940% ( 34) 00:07:46.881 10989.883 - 11040.295: 65.5172% ( 42) 00:07:46.881 11040.295 - 11090.708: 65.8251% ( 40) 00:07:46.881 11090.708 - 11141.120: 66.1869% ( 47) 00:07:46.881 11141.120 - 11191.532: 66.6333% ( 58) 00:07:46.881 11191.532 - 11241.945: 67.1028% ( 61) 00:07:46.881 11241.945 - 11292.357: 67.5416% ( 57) 00:07:46.881 11292.357 - 11342.769: 67.9649% ( 55) 00:07:46.881 11342.769 - 11393.182: 68.4190% ( 59) 00:07:46.881 11393.182 - 11443.594: 68.9039% ( 63) 00:07:46.881 11443.594 - 11494.006: 69.4119% ( 66) 00:07:46.881 11494.006 - 11544.418: 69.9507% ( 70) 00:07:46.881 11544.418 - 11594.831: 70.5203% ( 74) 00:07:46.881 11594.831 - 11645.243: 71.1284% ( 79) 00:07:46.881 11645.243 - 11695.655: 71.6749% ( 71) 00:07:46.881 11695.655 - 11746.068: 72.1521% ( 62) 00:07:46.881 11746.068 - 11796.480: 72.6524% ( 65) 00:07:46.881 11796.480 - 11846.892: 73.1373% ( 63) 00:07:46.881 11846.892 - 11897.305: 73.5760% ( 57) 00:07:46.881 11897.305 - 11947.717: 74.0302% ( 59) 00:07:46.881 11947.717 - 11998.129: 74.4073% ( 49) 00:07:46.881 11998.129 - 12048.542: 74.7922% ( 50) 00:07:46.881 12048.542 - 12098.954: 75.1462% ( 46) 00:07:46.881 12098.954 - 12149.366: 75.5465% ( 52) 00:07:46.881 12149.366 - 12199.778: 75.9236% ( 49) 00:07:46.881 12199.778 - 12250.191: 76.2700% ( 45) 00:07:46.881 12250.191 - 12300.603: 76.6241% ( 46) 00:07:46.881 12300.603 - 12351.015: 76.9550% ( 43) 00:07:46.881 12351.015 - 12401.428: 77.3168% ( 47) 00:07:46.881 12401.428 - 12451.840: 77.7709% ( 59) 00:07:46.881 12451.840 - 12502.252: 78.1789% ( 53) 00:07:46.881 12502.252 - 12552.665: 78.6099% ( 56) 00:07:46.881 12552.665 - 12603.077: 79.0256% ( 54) 00:07:46.881 12603.077 - 12653.489: 79.4874% ( 60) 00:07:46.881 12653.489 - 12703.902: 79.9646% ( 62) 00:07:46.881 12703.902 - 12754.314: 80.5804% ( 80) 00:07:46.881 12754.314 - 12804.726: 81.1884% ( 79) 00:07:46.881 12804.726 - 12855.138: 81.7118% ( 68) 00:07:46.881 12855.138 - 12905.551: 82.2044% ( 64) 00:07:46.881 12905.551 - 13006.375: 83.2050% ( 130) 00:07:46.881 13006.375 - 13107.200: 84.3981% ( 155) 00:07:46.881 13107.200 - 13208.025: 85.6912% ( 168) 00:07:46.881 13208.025 - 13308.849: 86.7919% ( 143) 00:07:46.881 13308.849 - 13409.674: 87.8002% ( 131) 00:07:46.881 13409.674 - 13510.498: 88.6392% ( 109) 00:07:46.881 13510.498 - 13611.323: 89.2934% ( 85) 00:07:46.881 13611.323 - 13712.148: 89.8476% ( 72) 00:07:46.881 13712.148 - 13812.972: 90.4326% ( 76) 00:07:46.881 13812.972 - 13913.797: 90.8559% ( 55) 00:07:46.881 13913.797 - 14014.622: 91.2177% ( 47) 00:07:46.881 14014.622 - 14115.446: 91.6179% ( 52) 00:07:46.881 14115.446 - 14216.271: 92.0259% ( 53) 00:07:46.881 14216.271 - 14317.095: 92.4646% ( 57) 00:07:46.881 14317.095 - 
14417.920: 92.8802% ( 54) 00:07:46.881 14417.920 - 14518.745: 93.2805% ( 52) 00:07:46.881 14518.745 - 14619.569: 93.7038% ( 55) 00:07:46.881 14619.569 - 14720.394: 94.0579% ( 46) 00:07:46.881 14720.394 - 14821.218: 94.3966% ( 44) 00:07:46.881 14821.218 - 14922.043: 94.8892% ( 64) 00:07:46.881 14922.043 - 15022.868: 95.2355% ( 45) 00:07:46.881 15022.868 - 15123.692: 95.5665% ( 43) 00:07:46.881 15123.692 - 15224.517: 95.9206% ( 46) 00:07:46.881 15224.517 - 15325.342: 96.1284% ( 27) 00:07:46.881 15325.342 - 15426.166: 96.2977% ( 22) 00:07:46.881 15426.166 - 15526.991: 96.5209% ( 29) 00:07:46.881 15526.991 - 15627.815: 96.6980% ( 23) 00:07:46.881 15627.815 - 15728.640: 96.8365% ( 18) 00:07:46.881 15728.640 - 15829.465: 97.0058% ( 22) 00:07:46.881 15829.465 - 15930.289: 97.1367% ( 17) 00:07:46.881 15930.289 - 16031.114: 97.2829% ( 19) 00:07:46.881 16031.114 - 16131.938: 97.4369% ( 20) 00:07:46.881 16131.938 - 16232.763: 97.5831% ( 19) 00:07:46.881 16232.763 - 16333.588: 97.6678% ( 11) 00:07:46.881 16333.588 - 16434.412: 97.7602% ( 12) 00:07:46.881 16434.412 - 16535.237: 97.8679% ( 14) 00:07:46.881 16535.237 - 16636.062: 98.0065% ( 18) 00:07:46.881 16636.062 - 16736.886: 98.1142% ( 14) 00:07:46.881 16736.886 - 16837.711: 98.1989% ( 11) 00:07:46.881 16837.711 - 16938.535: 98.2451% ( 6) 00:07:46.881 16938.535 - 17039.360: 98.2990% ( 7) 00:07:46.881 17039.360 - 17140.185: 98.3528% ( 7) 00:07:46.881 17140.185 - 17241.009: 98.3836% ( 4) 00:07:46.881 17241.009 - 17341.834: 98.4144% ( 4) 00:07:46.881 17341.834 - 17442.658: 98.4529% ( 5) 00:07:46.881 17442.658 - 17543.483: 98.5068% ( 7) 00:07:46.881 17543.483 - 17644.308: 98.5683% ( 8) 00:07:46.881 17644.308 - 17745.132: 98.6145% ( 6) 00:07:46.881 17745.132 - 17845.957: 98.7146% ( 13) 00:07:46.881 17845.957 - 17946.782: 98.7993% ( 11) 00:07:46.881 17946.782 - 18047.606: 98.8531% ( 7) 00:07:46.881 18047.606 - 18148.431: 98.8762% ( 3) 00:07:46.881 18148.431 - 18249.255: 98.9147% ( 5) 00:07:46.881 18249.255 - 18350.080: 98.9455% ( 4) 00:07:46.881 18350.080 - 18450.905: 98.9917% ( 6) 00:07:46.881 18450.905 - 18551.729: 99.0148% ( 3) 00:07:46.881 19761.625 - 19862.449: 99.0302% ( 2) 00:07:46.881 19862.449 - 19963.274: 99.0687% ( 5) 00:07:46.881 19963.274 - 20064.098: 99.0994% ( 4) 00:07:46.881 20064.098 - 20164.923: 99.1302% ( 4) 00:07:46.881 20164.923 - 20265.748: 99.1610% ( 4) 00:07:46.881 20265.748 - 20366.572: 99.1995% ( 5) 00:07:46.881 20366.572 - 20467.397: 99.2226% ( 3) 00:07:46.881 20467.397 - 20568.222: 99.2611% ( 5) 00:07:46.881 20568.222 - 20669.046: 99.2919% ( 4) 00:07:46.881 20669.046 - 20769.871: 99.3227% ( 4) 00:07:46.881 20769.871 - 20870.695: 99.3611% ( 5) 00:07:46.881 20870.695 - 20971.520: 99.3919% ( 4) 00:07:46.881 20971.520 - 21072.345: 99.4227% ( 4) 00:07:46.881 21072.345 - 21173.169: 99.4535% ( 4) 00:07:46.881 21173.169 - 21273.994: 99.4843% ( 4) 00:07:46.881 21273.994 - 21374.818: 99.5074% ( 3) 00:07:46.881 24903.680 - 25004.505: 99.5382% ( 4) 00:07:46.881 25004.505 - 25105.329: 99.5767% ( 5) 00:07:46.881 25105.329 - 25206.154: 99.6075% ( 4) 00:07:46.881 25206.154 - 25306.978: 99.6382% ( 4) 00:07:46.882 25306.978 - 25407.803: 99.6690% ( 4) 00:07:46.882 25407.803 - 25508.628: 99.6998% ( 4) 00:07:46.882 25508.628 - 25609.452: 99.7306% ( 4) 00:07:46.882 25609.452 - 25710.277: 99.7691% ( 5) 00:07:46.882 25710.277 - 25811.102: 99.7999% ( 4) 00:07:46.882 25811.102 - 26012.751: 99.8615% ( 8) 00:07:46.882 26012.751 - 26214.400: 99.9076% ( 6) 00:07:46.882 26214.400 - 26416.049: 99.9769% ( 9) 00:07:46.882 26416.049 - 26617.698: 100.0000% ( 3) 
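Each row in the histograms above is one latency bucket: a lower and upper bound in microseconds, the cumulative percentage of IOs that completed at or below the upper bound, and the count that landed in that bucket. A percentile can therefore be read off as the upper bound of the first bucket whose cumulative percentage reaches the target. Below is a minimal Python sketch of that lookup, assuming the rows have already been parsed into (low_us, high_us, cumulative_pct) tuples; the parsing itself is not shown, and the tool may interpolate within a bucket, so treat this as an approximation of the percentile lines perf prints in its summaries.

def latency_percentile(buckets, target_pct):
    # buckets: list of (low_us, high_us, cumulative_pct) sorted by latency.
    # Return the upper bound of the first bucket whose cumulative
    # percentage reaches the target percentile.
    for low_us, high_us, cum_pct in buckets:
        if cum_pct >= target_pct:
            return high_us
    raise ValueError("target percentile beyond recorded range")

# Three consecutive buckets transcribed from the NSID 1 table above:
buckets = [
    (21072.345, 21173.169, 99.4535),
    (21173.169, 21273.994, 99.4843),
    (21273.994, 21374.818, 99.5074),
]
print(latency_percentile(buckets, 99.5))  # 21374.818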
00:07:46.882 00:07:46.882 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:46.882 ============================================================================== 00:07:46.882 Range in us Cumulative IO count 00:07:46.882 5847.828 - 5873.034: 0.0154% ( 2) 00:07:46.882 5873.034 - 5898.240: 0.0539% ( 5) 00:07:46.882 5898.240 - 5923.446: 0.0770% ( 3) 00:07:46.882 5923.446 - 5948.652: 0.1078% ( 4) 00:07:46.882 5948.652 - 5973.858: 0.1385% ( 4) 00:07:46.882 5973.858 - 5999.065: 0.1693% ( 4) 00:07:46.882 5999.065 - 6024.271: 0.2232% ( 7) 00:07:46.882 6024.271 - 6049.477: 0.2617% ( 5) 00:07:46.882 6049.477 - 6074.683: 0.3695% ( 14) 00:07:46.882 6074.683 - 6099.889: 0.4387% ( 9) 00:07:46.882 6099.889 - 6125.095: 0.4926% ( 7) 00:07:46.882 6125.095 - 6150.302: 0.5619% ( 9) 00:07:46.882 6150.302 - 6175.508: 0.6158% ( 7) 00:07:46.882 6175.508 - 6200.714: 0.6619% ( 6) 00:07:46.882 6200.714 - 6225.920: 0.7697% ( 14) 00:07:46.882 6225.920 - 6251.126: 0.8929% ( 16) 00:07:46.882 6251.126 - 6276.332: 1.0083% ( 15) 00:07:46.882 6276.332 - 6301.538: 1.1315% ( 16) 00:07:46.882 6301.538 - 6326.745: 1.2700% ( 18) 00:07:46.882 6326.745 - 6351.951: 1.4932% ( 29) 00:07:46.882 6351.951 - 6377.157: 1.6472% ( 20) 00:07:46.882 6377.157 - 6402.363: 1.8396% ( 25) 00:07:46.882 6402.363 - 6427.569: 2.0243% ( 24) 00:07:46.882 6427.569 - 6452.775: 2.2014% ( 23) 00:07:46.882 6452.775 - 6503.188: 2.5708% ( 48) 00:07:46.882 6503.188 - 6553.600: 2.9942% ( 55) 00:07:46.882 6553.600 - 6604.012: 3.4945% ( 65) 00:07:46.882 6604.012 - 6654.425: 3.9717% ( 62) 00:07:46.882 6654.425 - 6704.837: 4.5182% ( 71) 00:07:46.882 6704.837 - 6755.249: 5.1031% ( 76) 00:07:46.882 6755.249 - 6805.662: 5.6496% ( 71) 00:07:46.882 6805.662 - 6856.074: 6.2269% ( 75) 00:07:46.882 6856.074 - 6906.486: 6.8427% ( 80) 00:07:46.882 6906.486 - 6956.898: 7.4738% ( 82) 00:07:46.882 6956.898 - 7007.311: 8.1897% ( 93) 00:07:46.882 7007.311 - 7057.723: 8.9825% ( 103) 00:07:46.882 7057.723 - 7108.135: 9.6983% ( 93) 00:07:46.882 7108.135 - 7158.548: 10.5065% ( 105) 00:07:46.882 7158.548 - 7208.960: 11.5533% ( 136) 00:07:46.882 7208.960 - 7259.372: 12.5847% ( 134) 00:07:46.882 7259.372 - 7309.785: 13.8855% ( 169) 00:07:46.882 7309.785 - 7360.197: 15.6404% ( 228) 00:07:46.882 7360.197 - 7410.609: 17.4261% ( 232) 00:07:46.882 7410.609 - 7461.022: 19.4504% ( 263) 00:07:46.882 7461.022 - 7511.434: 21.3978% ( 253) 00:07:46.882 7511.434 - 7561.846: 23.4991% ( 273) 00:07:46.882 7561.846 - 7612.258: 25.8621% ( 307) 00:07:46.882 7612.258 - 7662.671: 28.0326% ( 282) 00:07:46.882 7662.671 - 7713.083: 30.1955% ( 281) 00:07:46.882 7713.083 - 7763.495: 32.4738% ( 296) 00:07:46.882 7763.495 - 7813.908: 34.7291% ( 293) 00:07:46.882 7813.908 - 7864.320: 36.9689% ( 291) 00:07:46.882 7864.320 - 7914.732: 39.1626% ( 285) 00:07:46.882 7914.732 - 7965.145: 41.2485% ( 271) 00:07:46.882 7965.145 - 8015.557: 43.3651% ( 275) 00:07:46.882 8015.557 - 8065.969: 45.2124% ( 240) 00:07:46.882 8065.969 - 8116.382: 46.8519% ( 213) 00:07:46.882 8116.382 - 8166.794: 48.2451% ( 181) 00:07:46.882 8166.794 - 8217.206: 49.3458% ( 143) 00:07:46.882 8217.206 - 8267.618: 50.2232% ( 114) 00:07:46.882 8267.618 - 8318.031: 50.9621% ( 96) 00:07:46.882 8318.031 - 8368.443: 51.5856% ( 81) 00:07:46.882 8368.443 - 8418.855: 52.1090% ( 68) 00:07:46.882 8418.855 - 8469.268: 52.4938% ( 50) 00:07:46.882 8469.268 - 8519.680: 52.8248% ( 43) 00:07:46.882 8519.680 - 8570.092: 53.1481% ( 42) 00:07:46.882 8570.092 - 8620.505: 53.4021% ( 33) 00:07:46.882 8620.505 - 8670.917: 53.6715% ( 35) 00:07:46.882 8670.917 - 
8721.329: 53.8716% ( 26) 00:07:46.882 8721.329 - 8771.742: 54.0256% ( 20) 00:07:46.882 8771.742 - 8822.154: 54.1795% ( 20) 00:07:46.882 8822.154 - 8872.566: 54.3026% ( 16) 00:07:46.882 8872.566 - 8922.978: 54.5182% ( 28) 00:07:46.882 8922.978 - 8973.391: 54.6721% ( 20) 00:07:46.882 8973.391 - 9023.803: 54.8568% ( 24) 00:07:46.882 9023.803 - 9074.215: 55.0262% ( 22) 00:07:46.882 9074.215 - 9124.628: 55.2032% ( 23) 00:07:46.882 9124.628 - 9175.040: 55.3571% ( 20) 00:07:46.882 9175.040 - 9225.452: 55.5111% ( 20) 00:07:46.882 9225.452 - 9275.865: 55.7112% ( 26) 00:07:46.882 9275.865 - 9326.277: 55.9421% ( 30) 00:07:46.882 9326.277 - 9376.689: 56.1961% ( 33) 00:07:46.882 9376.689 - 9427.102: 56.4039% ( 27) 00:07:46.882 9427.102 - 9477.514: 56.6425% ( 31) 00:07:46.882 9477.514 - 9527.926: 56.9196% ( 36) 00:07:46.882 9527.926 - 9578.338: 57.2121% ( 38) 00:07:46.882 9578.338 - 9628.751: 57.5585% ( 45) 00:07:46.882 9628.751 - 9679.163: 57.8895% ( 43) 00:07:46.882 9679.163 - 9729.575: 58.1974% ( 40) 00:07:46.882 9729.575 - 9779.988: 58.5360% ( 44) 00:07:46.882 9779.988 - 9830.400: 58.8439% ( 40) 00:07:46.882 9830.400 - 9880.812: 59.1133% ( 35) 00:07:46.882 9880.812 - 9931.225: 59.4289% ( 41) 00:07:46.882 9931.225 - 9981.637: 59.7752% ( 45) 00:07:46.882 9981.637 - 10032.049: 60.1293% ( 46) 00:07:46.882 10032.049 - 10082.462: 60.4834% ( 46) 00:07:46.882 10082.462 - 10132.874: 60.8451% ( 47) 00:07:46.882 10132.874 - 10183.286: 61.2069% ( 47) 00:07:46.882 10183.286 - 10233.698: 61.5841% ( 49) 00:07:46.882 10233.698 - 10284.111: 61.9766% ( 51) 00:07:46.882 10284.111 - 10334.523: 62.3615% ( 50) 00:07:46.882 10334.523 - 10384.935: 62.7001% ( 44) 00:07:46.882 10384.935 - 10435.348: 62.9618% ( 34) 00:07:46.882 10435.348 - 10485.760: 63.2235% ( 34) 00:07:46.882 10485.760 - 10536.172: 63.4698% ( 32) 00:07:46.882 10536.172 - 10586.585: 63.7007% ( 30) 00:07:46.882 10586.585 - 10636.997: 63.9624% ( 34) 00:07:46.882 10636.997 - 10687.409: 64.2164% ( 33) 00:07:46.882 10687.409 - 10737.822: 64.4781% ( 34) 00:07:46.882 10737.822 - 10788.234: 64.7629% ( 37) 00:07:46.882 10788.234 - 10838.646: 65.0400% ( 36) 00:07:46.882 10838.646 - 10889.058: 65.3325% ( 38) 00:07:46.882 10889.058 - 10939.471: 65.6481% ( 41) 00:07:46.882 10939.471 - 10989.883: 66.0483% ( 52) 00:07:46.882 10989.883 - 11040.295: 66.4409% ( 51) 00:07:46.882 11040.295 - 11090.708: 66.8642% ( 55) 00:07:46.882 11090.708 - 11141.120: 67.1952% ( 43) 00:07:46.882 11141.120 - 11191.532: 67.5954% ( 52) 00:07:46.882 11191.532 - 11241.945: 68.0111% ( 54) 00:07:46.882 11241.945 - 11292.357: 68.3344% ( 42) 00:07:46.882 11292.357 - 11342.769: 68.6653% ( 43) 00:07:46.882 11342.769 - 11393.182: 68.9732% ( 40) 00:07:46.882 11393.182 - 11443.594: 69.2888% ( 41) 00:07:46.882 11443.594 - 11494.006: 69.6352% ( 45) 00:07:46.882 11494.006 - 11544.418: 69.9969% ( 47) 00:07:46.882 11544.418 - 11594.831: 70.3972% ( 52) 00:07:46.882 11594.831 - 11645.243: 70.8436% ( 58) 00:07:46.882 11645.243 - 11695.655: 71.3208% ( 62) 00:07:46.882 11695.655 - 11746.068: 71.8442% ( 68) 00:07:46.882 11746.068 - 11796.480: 72.2445% ( 52) 00:07:46.882 11796.480 - 11846.892: 72.6909% ( 58) 00:07:46.882 11846.892 - 11897.305: 73.1758% ( 63) 00:07:46.882 11897.305 - 11947.717: 73.5530% ( 49) 00:07:46.882 11947.717 - 11998.129: 73.8993% ( 45) 00:07:46.882 11998.129 - 12048.542: 74.3611% ( 60) 00:07:46.882 12048.542 - 12098.954: 74.7460% ( 50) 00:07:46.882 12098.954 - 12149.366: 75.1462% ( 52) 00:07:46.882 12149.366 - 12199.778: 75.5157% ( 48) 00:07:46.882 12199.778 - 12250.191: 75.9006% ( 50) 
00:07:46.882 12250.191 - 12300.603: 76.3162% ( 54) 00:07:46.882 12300.603 - 12351.015: 76.7318% ( 54) 00:07:46.882 12351.015 - 12401.428: 77.1629% ( 56) 00:07:46.882 12401.428 - 12451.840: 77.6093% ( 58) 00:07:46.882 12451.840 - 12502.252: 78.0942% ( 63) 00:07:46.882 12502.252 - 12552.665: 78.6176% ( 68) 00:07:46.882 12552.665 - 12603.077: 79.0563% ( 57) 00:07:46.882 12603.077 - 12653.489: 79.5182% ( 60) 00:07:46.882 12653.489 - 12703.902: 79.9492% ( 56) 00:07:46.882 12703.902 - 12754.314: 80.3879% ( 57) 00:07:46.882 12754.314 - 12804.726: 80.9421% ( 72) 00:07:46.882 12804.726 - 12855.138: 81.4039% ( 60) 00:07:46.882 12855.138 - 12905.551: 81.8273% ( 55) 00:07:46.882 12905.551 - 13006.375: 82.7894% ( 125) 00:07:46.882 13006.375 - 13107.200: 83.7284% ( 122) 00:07:46.882 13107.200 - 13208.025: 84.6983% ( 126) 00:07:46.882 13208.025 - 13308.849: 85.7220% ( 133) 00:07:46.882 13308.849 - 13409.674: 86.5917% ( 113) 00:07:46.882 13409.674 - 13510.498: 87.4153% ( 107) 00:07:46.882 13510.498 - 13611.323: 88.1235% ( 92) 00:07:46.882 13611.323 - 13712.148: 88.8470% ( 94) 00:07:46.882 13712.148 - 13812.972: 89.5320% ( 89) 00:07:46.882 13812.972 - 13913.797: 90.1786% ( 84) 00:07:46.883 13913.797 - 14014.622: 90.8482% ( 87) 00:07:46.883 14014.622 - 14115.446: 91.4101% ( 73) 00:07:46.883 14115.446 - 14216.271: 91.9874% ( 75) 00:07:46.883 14216.271 - 14317.095: 92.5262% ( 70) 00:07:46.883 14317.095 - 14417.920: 93.0573% ( 69) 00:07:46.883 14417.920 - 14518.745: 93.5422% ( 63) 00:07:46.883 14518.745 - 14619.569: 94.0040% ( 60) 00:07:46.883 14619.569 - 14720.394: 94.3889% ( 50) 00:07:46.883 14720.394 - 14821.218: 94.7275% ( 44) 00:07:46.883 14821.218 - 14922.043: 95.0816% ( 46) 00:07:46.883 14922.043 - 15022.868: 95.5896% ( 66) 00:07:46.883 15022.868 - 15123.692: 95.9591% ( 48) 00:07:46.883 15123.692 - 15224.517: 96.2977% ( 44) 00:07:46.883 15224.517 - 15325.342: 96.5902% ( 38) 00:07:46.883 15325.342 - 15426.166: 96.7826% ( 25) 00:07:46.883 15426.166 - 15526.991: 96.9597% ( 23) 00:07:46.883 15526.991 - 15627.815: 97.1521% ( 25) 00:07:46.883 15627.815 - 15728.640: 97.2983% ( 19) 00:07:46.883 15728.640 - 15829.465: 97.3676% ( 9) 00:07:46.883 15829.465 - 15930.289: 97.4061% ( 5) 00:07:46.883 15930.289 - 16031.114: 97.4600% ( 7) 00:07:46.883 16031.114 - 16131.938: 97.5446% ( 11) 00:07:46.883 16131.938 - 16232.763: 97.6293% ( 11) 00:07:46.883 16232.763 - 16333.588: 97.6832% ( 7) 00:07:46.883 16333.588 - 16434.412: 97.7217% ( 5) 00:07:46.883 16434.412 - 16535.237: 97.7986% ( 10) 00:07:46.883 16535.237 - 16636.062: 97.9141% ( 15) 00:07:46.883 16636.062 - 16736.886: 97.9988% ( 11) 00:07:46.883 16736.886 - 16837.711: 98.0911% ( 12) 00:07:46.883 16837.711 - 16938.535: 98.1912% ( 13) 00:07:46.883 16938.535 - 17039.360: 98.2836% ( 12) 00:07:46.883 17039.360 - 17140.185: 98.3605% ( 10) 00:07:46.883 17140.185 - 17241.009: 98.4067% ( 6) 00:07:46.883 17241.009 - 17341.834: 98.4991% ( 12) 00:07:46.883 17341.834 - 17442.658: 98.6299% ( 17) 00:07:46.883 17442.658 - 17543.483: 98.6992% ( 9) 00:07:46.883 17543.483 - 17644.308: 98.7916% ( 12) 00:07:46.883 17644.308 - 17745.132: 98.8300% ( 5) 00:07:46.883 17745.132 - 17845.957: 98.8762% ( 6) 00:07:46.883 17845.957 - 17946.782: 98.9147% ( 5) 00:07:46.883 17946.782 - 18047.606: 98.9301% ( 2) 00:07:46.883 18047.606 - 18148.431: 98.9994% ( 9) 00:07:46.883 18148.431 - 18249.255: 99.0687% ( 9) 00:07:46.883 18249.255 - 18350.080: 99.0994% ( 4) 00:07:46.883 18350.080 - 18450.905: 99.1379% ( 5) 00:07:46.883 18450.905 - 18551.729: 99.1687% ( 4) 00:07:46.883 18551.729 - 18652.554: 99.1995% 
( 4) 00:07:46.883 18652.554 - 18753.378: 99.2303% ( 4) 00:07:46.883 18753.378 - 18854.203: 99.2611% ( 4) 00:07:46.883 18854.203 - 18955.028: 99.2919% ( 4) 00:07:46.883 18955.028 - 19055.852: 99.3227% ( 4) 00:07:46.883 19055.852 - 19156.677: 99.3534% ( 4) 00:07:46.883 19156.677 - 19257.502: 99.3919% ( 5) 00:07:46.883 19257.502 - 19358.326: 99.4227% ( 4) 00:07:46.883 19358.326 - 19459.151: 99.4535% ( 4) 00:07:46.883 19459.151 - 19559.975: 99.4920% ( 5) 00:07:46.883 19559.975 - 19660.800: 99.5074% ( 2) 00:07:46.883 23189.662 - 23290.486: 99.5382% ( 4) 00:07:46.883 23290.486 - 23391.311: 99.5690% ( 4) 00:07:46.883 23391.311 - 23492.135: 99.5998% ( 4) 00:07:46.883 23492.135 - 23592.960: 99.6305% ( 4) 00:07:46.883 23592.960 - 23693.785: 99.6613% ( 4) 00:07:46.883 23693.785 - 23794.609: 99.6998% ( 5) 00:07:46.883 23794.609 - 23895.434: 99.7306% ( 4) 00:07:46.883 23895.434 - 23996.258: 99.7614% ( 4) 00:07:46.883 23996.258 - 24097.083: 99.7922% ( 4) 00:07:46.883 24097.083 - 24197.908: 99.8230% ( 4) 00:07:46.883 24197.908 - 24298.732: 99.8538% ( 4) 00:07:46.883 24298.732 - 24399.557: 99.8845% ( 4) 00:07:46.883 24399.557 - 24500.382: 99.9153% ( 4) 00:07:46.883 24500.382 - 24601.206: 99.9538% ( 5) 00:07:46.883 24601.206 - 24702.031: 99.9769% ( 3) 00:07:46.883 24702.031 - 24802.855: 100.0000% ( 3) 00:07:46.883 00:07:46.883 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:46.883 ============================================================================== 00:07:46.883 Range in us Cumulative IO count 00:07:46.883 5847.828 - 5873.034: 0.0460% ( 6) 00:07:46.883 5873.034 - 5898.240: 0.0689% ( 3) 00:07:46.883 5898.240 - 5923.446: 0.0996% ( 4) 00:07:46.883 5923.446 - 5948.652: 0.1302% ( 4) 00:07:46.883 5948.652 - 5973.858: 0.1608% ( 4) 00:07:46.883 5973.858 - 5999.065: 0.1915% ( 4) 00:07:46.883 5999.065 - 6024.271: 0.2298% ( 5) 00:07:46.883 6024.271 - 6049.477: 0.2987% ( 9) 00:07:46.883 6049.477 - 6074.683: 0.3523% ( 7) 00:07:46.883 6074.683 - 6099.889: 0.4519% ( 13) 00:07:46.883 6099.889 - 6125.095: 0.5285% ( 10) 00:07:46.883 6125.095 - 6150.302: 0.5898% ( 8) 00:07:46.883 6150.302 - 6175.508: 0.6893% ( 13) 00:07:46.883 6175.508 - 6200.714: 0.7659% ( 10) 00:07:46.883 6200.714 - 6225.920: 0.8272% ( 8) 00:07:46.883 6225.920 - 6251.126: 0.9115% ( 11) 00:07:46.883 6251.126 - 6276.332: 1.0263% ( 15) 00:07:46.883 6276.332 - 6301.538: 1.1566% ( 17) 00:07:46.883 6301.538 - 6326.745: 1.3327% ( 23) 00:07:46.883 6326.745 - 6351.951: 1.5089% ( 23) 00:07:46.883 6351.951 - 6377.157: 1.7157% ( 27) 00:07:46.883 6377.157 - 6402.363: 1.8919% ( 23) 00:07:46.883 6402.363 - 6427.569: 2.0527% ( 21) 00:07:46.883 6427.569 - 6452.775: 2.2289% ( 23) 00:07:46.883 6452.775 - 6503.188: 2.6118% ( 50) 00:07:46.883 6503.188 - 6553.600: 3.0178% ( 53) 00:07:46.883 6553.600 - 6604.012: 3.5539% ( 70) 00:07:46.883 6604.012 - 6654.425: 3.9828% ( 56) 00:07:46.883 6654.425 - 6704.837: 4.4730% ( 64) 00:07:46.883 6704.837 - 6755.249: 5.0551% ( 76) 00:07:46.883 6755.249 - 6805.662: 5.6066% ( 72) 00:07:46.883 6805.662 - 6856.074: 6.2347% ( 82) 00:07:46.883 6856.074 - 6906.486: 6.9393% ( 92) 00:07:46.883 6906.486 - 6956.898: 7.6440% ( 92) 00:07:46.883 6956.898 - 7007.311: 8.3180% ( 88) 00:07:46.883 7007.311 - 7057.723: 9.0227% ( 92) 00:07:46.883 7057.723 - 7108.135: 9.7809% ( 99) 00:07:46.883 7108.135 - 7158.548: 10.6694% ( 116) 00:07:46.883 7158.548 - 7208.960: 11.6192% ( 124) 00:07:46.883 7208.960 - 7259.372: 12.7528% ( 148) 00:07:46.883 7259.372 - 7309.785: 14.0319% ( 167) 00:07:46.883 7309.785 - 7360.197: 15.6250% ( 208) 
00:07:46.883 7360.197 - 7410.609: 17.4249% ( 235) 00:07:46.883 7410.609 - 7461.022: 19.2938% ( 244) 00:07:46.883 7461.022 - 7511.434: 21.3695% ( 271) 00:07:46.883 7511.434 - 7561.846: 23.5677% ( 287) 00:07:46.883 7561.846 - 7612.258: 25.7047% ( 279) 00:07:46.883 7612.258 - 7662.671: 27.9412% ( 292) 00:07:46.883 7662.671 - 7713.083: 30.1930% ( 294) 00:07:46.883 7713.083 - 7763.495: 32.3989% ( 288) 00:07:46.883 7763.495 - 7813.908: 34.6124% ( 289) 00:07:46.883 7813.908 - 7864.320: 36.8107% ( 287) 00:07:46.883 7864.320 - 7914.732: 39.1085% ( 300) 00:07:46.883 7914.732 - 7965.145: 41.1458% ( 266) 00:07:46.883 7965.145 - 8015.557: 43.1985% ( 268) 00:07:46.883 8015.557 - 8065.969: 45.0444% ( 241) 00:07:46.883 8065.969 - 8116.382: 46.7295% ( 220) 00:07:46.883 8116.382 - 8166.794: 48.0928% ( 178) 00:07:46.883 8166.794 - 8217.206: 49.1498% ( 138) 00:07:46.883 8217.206 - 8267.618: 50.0843% ( 122) 00:07:46.883 8267.618 - 8318.031: 50.8502% ( 100) 00:07:46.883 8318.031 - 8368.443: 51.4017% ( 72) 00:07:46.883 8368.443 - 8418.855: 51.8459% ( 58) 00:07:46.883 8418.855 - 8469.268: 52.2825% ( 57) 00:07:46.883 8469.268 - 8519.680: 52.6808% ( 52) 00:07:46.883 8519.680 - 8570.092: 53.0484% ( 48) 00:07:46.883 8570.092 - 8620.505: 53.3931% ( 45) 00:07:46.883 8620.505 - 8670.917: 53.6535% ( 34) 00:07:46.883 8670.917 - 8721.329: 53.8909% ( 31) 00:07:46.883 8721.329 - 8771.742: 54.0748% ( 24) 00:07:46.883 8771.742 - 8822.154: 54.2662% ( 25) 00:07:46.883 8822.154 - 8872.566: 54.4577% ( 25) 00:07:46.883 8872.566 - 8922.978: 54.6722% ( 28) 00:07:46.883 8922.978 - 8973.391: 54.8943% ( 29) 00:07:46.883 8973.391 - 9023.803: 55.1088% ( 28) 00:07:46.883 9023.803 - 9074.215: 55.3385% ( 30) 00:07:46.883 9074.215 - 9124.628: 55.5990% ( 34) 00:07:46.883 9124.628 - 9175.040: 55.8594% ( 34) 00:07:46.883 9175.040 - 9225.452: 56.1198% ( 34) 00:07:46.883 9225.452 - 9275.865: 56.3649% ( 32) 00:07:46.883 9275.865 - 9326.277: 56.5640% ( 26) 00:07:46.883 9326.277 - 9376.689: 56.7938% ( 30) 00:07:46.883 9376.689 - 9427.102: 56.9623% ( 22) 00:07:46.883 9427.102 - 9477.514: 57.1232% ( 21) 00:07:46.883 9477.514 - 9527.926: 57.2993% ( 23) 00:07:46.883 9527.926 - 9578.338: 57.4142% ( 15) 00:07:46.883 9578.338 - 9628.751: 57.5444% ( 17) 00:07:46.883 9628.751 - 9679.163: 57.6440% ( 13) 00:07:46.883 9679.163 - 9729.575: 57.8202% ( 23) 00:07:46.883 9729.575 - 9779.988: 58.0116% ( 25) 00:07:46.883 9779.988 - 9830.400: 58.1955% ( 24) 00:07:46.883 9830.400 - 9880.812: 58.4176% ( 29) 00:07:46.883 9880.812 - 9931.225: 58.7240% ( 40) 00:07:46.883 9931.225 - 9981.637: 58.9537% ( 30) 00:07:46.883 9981.637 - 10032.049: 59.2525% ( 39) 00:07:46.883 10032.049 - 10082.462: 59.5741% ( 42) 00:07:46.883 10082.462 - 10132.874: 59.9188% ( 45) 00:07:46.883 10132.874 - 10183.286: 60.3018% ( 50) 00:07:46.883 10183.286 - 10233.698: 60.6924% ( 51) 00:07:46.884 10233.698 - 10284.111: 61.0983% ( 53) 00:07:46.884 10284.111 - 10334.523: 61.4890% ( 51) 00:07:46.884 10334.523 - 10384.935: 61.8566% ( 48) 00:07:46.884 10384.935 - 10435.348: 62.2549% ( 52) 00:07:46.884 10435.348 - 10485.760: 62.6455% ( 51) 00:07:46.884 10485.760 - 10536.172: 63.0515% ( 53) 00:07:46.884 10536.172 - 10586.585: 63.4498% ( 52) 00:07:46.884 10586.585 - 10636.997: 63.8480% ( 52) 00:07:46.884 10636.997 - 10687.409: 64.2387% ( 51) 00:07:46.884 10687.409 - 10737.822: 64.5987% ( 47) 00:07:46.884 10737.822 - 10788.234: 64.9357% ( 44) 00:07:46.884 10788.234 - 10838.646: 65.2880% ( 46) 00:07:46.884 10838.646 - 10889.058: 65.6327% ( 45) 00:07:46.884 10889.058 - 10939.471: 66.0003% ( 48) 00:07:46.884 
10939.471 - 10989.883: 66.3526% ( 46) 00:07:46.884 10989.883 - 11040.295: 66.6590% ( 40) 00:07:46.884 11040.295 - 11090.708: 66.9577% ( 39) 00:07:46.884 11090.708 - 11141.120: 67.3024% ( 45) 00:07:46.884 11141.120 - 11191.532: 67.6624% ( 47) 00:07:46.884 11191.532 - 11241.945: 68.1143% ( 59) 00:07:46.884 11241.945 - 11292.357: 68.5968% ( 63) 00:07:46.884 11292.357 - 11342.769: 69.0717% ( 62) 00:07:46.884 11342.769 - 11393.182: 69.5389% ( 61) 00:07:46.884 11393.182 - 11443.594: 69.9525% ( 54) 00:07:46.884 11443.594 - 11494.006: 70.3891% ( 57) 00:07:46.884 11494.006 - 11544.418: 70.8563% ( 61) 00:07:46.884 11544.418 - 11594.831: 71.3006% ( 58) 00:07:46.884 11594.831 - 11645.243: 71.7371% ( 57) 00:07:46.884 11645.243 - 11695.655: 72.2197% ( 63) 00:07:46.884 11695.655 - 11746.068: 72.7252% ( 66) 00:07:46.884 11746.068 - 11796.480: 73.1771% ( 59) 00:07:46.884 11796.480 - 11846.892: 73.6366% ( 60) 00:07:46.884 11846.892 - 11897.305: 74.1192% ( 63) 00:07:46.884 11897.305 - 11947.717: 74.6170% ( 65) 00:07:46.884 11947.717 - 11998.129: 75.1762% ( 73) 00:07:46.884 11998.129 - 12048.542: 75.6127% ( 57) 00:07:46.884 12048.542 - 12098.954: 76.0876% ( 62) 00:07:46.884 12098.954 - 12149.366: 76.4553% ( 48) 00:07:46.884 12149.366 - 12199.778: 76.8153% ( 47) 00:07:46.884 12199.778 - 12250.191: 77.1293% ( 41) 00:07:46.884 12250.191 - 12300.603: 77.4586% ( 43) 00:07:46.884 12300.603 - 12351.015: 77.8569% ( 52) 00:07:46.884 12351.015 - 12401.428: 78.1863% ( 43) 00:07:46.884 12401.428 - 12451.840: 78.5309% ( 45) 00:07:46.884 12451.840 - 12502.252: 78.8220% ( 38) 00:07:46.884 12502.252 - 12552.665: 79.1437% ( 42) 00:07:46.884 12552.665 - 12603.077: 79.5190% ( 49) 00:07:46.884 12603.077 - 12653.489: 79.8100% ( 38) 00:07:46.884 12653.489 - 12703.902: 80.1241% ( 41) 00:07:46.884 12703.902 - 12754.314: 80.4611% ( 44) 00:07:46.884 12754.314 - 12804.726: 80.8287% ( 48) 00:07:46.884 12804.726 - 12855.138: 81.2347% ( 53) 00:07:46.884 12855.138 - 12905.551: 81.6023% ( 48) 00:07:46.884 12905.551 - 13006.375: 82.6363% ( 135) 00:07:46.884 13006.375 - 13107.200: 83.4406% ( 105) 00:07:46.884 13107.200 - 13208.025: 84.2448% ( 105) 00:07:46.884 13208.025 - 13308.849: 85.3018% ( 138) 00:07:46.884 13308.849 - 13409.674: 86.3128% ( 132) 00:07:46.884 13409.674 - 13510.498: 87.2472% ( 122) 00:07:46.884 13510.498 - 13611.323: 88.1127% ( 113) 00:07:46.884 13611.323 - 13712.148: 88.9170% ( 105) 00:07:46.884 13712.148 - 13812.972: 89.7978% ( 115) 00:07:46.884 13812.972 - 13913.797: 90.4795% ( 89) 00:07:46.884 13913.797 - 14014.622: 91.1918% ( 93) 00:07:46.884 14014.622 - 14115.446: 91.8658% ( 88) 00:07:46.884 14115.446 - 14216.271: 92.5092% ( 84) 00:07:46.884 14216.271 - 14317.095: 93.0913% ( 76) 00:07:46.884 14317.095 - 14417.920: 93.6198% ( 69) 00:07:46.884 14417.920 - 14518.745: 94.2249% ( 79) 00:07:46.884 14518.745 - 14619.569: 94.9219% ( 91) 00:07:46.884 14619.569 - 14720.394: 95.4504% ( 69) 00:07:46.884 14720.394 - 14821.218: 95.9176% ( 61) 00:07:46.884 14821.218 - 14922.043: 96.2852% ( 48) 00:07:46.884 14922.043 - 15022.868: 96.5839% ( 39) 00:07:46.884 15022.868 - 15123.692: 96.9056% ( 42) 00:07:46.884 15123.692 - 15224.517: 97.1890% ( 37) 00:07:46.884 15224.517 - 15325.342: 97.4341% ( 32) 00:07:46.884 15325.342 - 15426.166: 97.5567% ( 16) 00:07:46.884 15426.166 - 15526.991: 97.6639% ( 14) 00:07:46.884 15526.991 - 15627.815: 97.7711% ( 14) 00:07:46.884 15627.815 - 15728.640: 97.8631% ( 12) 00:07:46.884 15728.640 - 15829.465: 97.9703% ( 14) 00:07:46.884 15829.465 - 15930.289: 98.0316% ( 8) 00:07:46.884 15930.289 - 16031.114: 
98.0392% ( 1) 00:07:46.884 16232.763 - 16333.588: 98.0775% ( 5) 00:07:46.884 16333.588 - 16434.412: 98.1311% ( 7) 00:07:46.884 16434.412 - 16535.237: 98.1771% ( 6) 00:07:46.884 16535.237 - 16636.062: 98.2307% ( 7) 00:07:46.884 16636.062 - 16736.886: 98.2767% ( 6) 00:07:46.884 16736.886 - 16837.711: 98.3150% ( 5) 00:07:46.884 16837.711 - 16938.535: 98.3532% ( 5) 00:07:46.884 16938.535 - 17039.360: 98.4145% ( 8) 00:07:46.884 17039.360 - 17140.185: 98.5141% ( 13) 00:07:46.884 17140.185 - 17241.009: 98.6673% ( 20) 00:07:46.884 17241.009 - 17341.834: 98.8664% ( 26) 00:07:46.884 17341.834 - 17442.658: 98.9737% ( 14) 00:07:46.884 17442.658 - 17543.483: 99.0885% ( 15) 00:07:46.884 17543.483 - 17644.308: 99.2111% ( 16) 00:07:46.884 17644.308 - 17745.132: 99.3336% ( 16) 00:07:46.884 17745.132 - 17845.957: 99.4562% ( 16) 00:07:46.884 17845.957 - 17946.782: 99.5404% ( 11) 00:07:46.884 17946.782 - 18047.606: 99.6324% ( 12) 00:07:46.884 18047.606 - 18148.431: 99.7013% ( 9) 00:07:46.884 18148.431 - 18249.255: 99.7319% ( 4) 00:07:46.884 18249.255 - 18350.080: 99.7626% ( 4) 00:07:46.884 18350.080 - 18450.905: 99.8009% ( 5) 00:07:46.884 18450.905 - 18551.729: 99.8315% ( 4) 00:07:46.884 18551.729 - 18652.554: 99.8621% ( 4) 00:07:46.884 18652.554 - 18753.378: 99.8928% ( 4) 00:07:46.884 18753.378 - 18854.203: 99.9234% ( 4) 00:07:46.884 18854.203 - 18955.028: 99.9617% ( 5) 00:07:46.884 18955.028 - 19055.852: 99.9923% ( 4) 00:07:46.884 19055.852 - 19156.677: 100.0000% ( 1) 00:07:46.884 00:07:46.884 18:17:05 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:07:47.849 Initializing NVMe Controllers 00:07:47.849 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:47.849 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:47.849 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:47.849 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:47.849 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:47.849 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:47.849 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:47.849 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:47.849 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:47.849 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:47.849 Initialization complete. Launching workers. 
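The spdk_nvme_perf invocation above runs a write workload (-w write) at queue depth 128 (-q 128) with 12288-byte IOs (-o 12288) for one second (-t 1) against shared-memory group 0 (-i 0); per the tool's usage text, giving -L twice (-LL) enables software latency tracking together with the detailed per-bucket histograms. The summary table that follows prints one row per attached namespace with IOPS, throughput in MiB/s, and average/min/max latency in microseconds. A minimal sketch for pulling those rows into structured records, where the regex and field names are illustrative helpers rather than anything SPDK provides:

import re

# Matches summary rows of the form:
#   PCIE (0000:00:13.0) NSID 1 from core 0: 8635.54 101.20 14850.16 10171.44 43860.66
ROW = re.compile(
    r"PCIE \((?P<bdf>[0-9a-fA-F:.]+)\) NSID (?P<nsid>\d+) from core (?P<core>\d+):\s+"
    r"(?P<iops>[\d.]+)\s+(?P<mib_s>[\d.]+)\s+"
    r"(?P<avg_us>[\d.]+)\s+(?P<min_us>[\d.]+)\s+(?P<max_us>[\d.]+)"
)

def parse_summary(log_text):
    # Yield one dict per device row found in the perf summary table.
    for match in ROW.finditer(log_text):
        row = match.groupdict()
        row["nsid"], row["core"] = int(row["nsid"]), int(row["core"])
        for key in ("iops", "mib_s", "avg_us", "min_us", "max_us"):
            row[key] = float(row[key])
        yield row

Run against the table below, this yields six records; the 0000:00:13.0 row, for example, parses to 8635.54 IOPS at an average latency of 14850.16 us. The "Total" row is deliberately not matched, since it carries no PCIE address.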
00:07:47.849 ======================================================== 00:07:47.849 Latency(us) 00:07:47.849 Device Information : IOPS MiB/s Average min max 00:07:47.849 PCIE (0000:00:13.0) NSID 1 from core 0: 8635.54 101.20 14850.16 10171.44 43860.66 00:07:47.849 PCIE (0000:00:10.0) NSID 1 from core 0: 8635.54 101.20 14823.94 9776.90 42710.42 00:07:47.849 PCIE (0000:00:11.0) NSID 1 from core 0: 8635.54 101.20 14796.15 10373.61 40871.92 00:07:47.849 PCIE (0000:00:12.0) NSID 1 from core 0: 8635.54 101.20 14769.80 9983.85 40365.06 00:07:47.849 PCIE (0000:00:12.0) NSID 2 from core 0: 8635.54 101.20 14743.35 9971.40 38694.44 00:07:47.849 PCIE (0000:00:12.0) NSID 3 from core 0: 8699.51 101.95 14608.80 10429.92 29054.65 00:07:47.849 ======================================================== 00:07:47.849 Total : 51877.23 607.94 14765.17 9776.90 43860.66 00:07:47.849 00:07:47.849 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:47.849 ================================================================================= 00:07:47.849 1.00000% : 11141.120us 00:07:47.849 10.00000% : 12351.015us 00:07:47.849 25.00000% : 13611.323us 00:07:47.849 50.00000% : 14317.095us 00:07:47.849 75.00000% : 15123.692us 00:07:47.849 90.00000% : 17140.185us 00:07:47.849 95.00000% : 18551.729us 00:07:47.849 98.00000% : 22685.538us 00:07:47.849 99.00000% : 34280.369us 00:07:47.849 99.50000% : 42346.338us 00:07:47.849 99.90000% : 43556.234us 00:07:47.849 99.99000% : 43959.532us 00:07:47.849 99.99900% : 43959.532us 00:07:47.849 99.99990% : 43959.532us 00:07:47.849 99.99999% : 43959.532us 00:07:47.849 00:07:47.849 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:47.849 ================================================================================= 00:07:47.849 1.00000% : 10989.883us 00:07:47.849 10.00000% : 12401.428us 00:07:47.849 25.00000% : 13611.323us 00:07:47.849 50.00000% : 14317.095us 00:07:47.849 75.00000% : 15224.517us 00:07:47.849 90.00000% : 17241.009us 00:07:47.849 95.00000% : 18450.905us 00:07:47.849 98.00000% : 22988.012us 00:07:47.849 99.00000% : 31860.578us 00:07:47.849 99.50000% : 41136.443us 00:07:47.849 99.90000% : 42547.988us 00:07:47.849 99.99000% : 42749.637us 00:07:47.849 99.99900% : 42749.637us 00:07:47.849 99.99990% : 42749.637us 00:07:47.849 99.99999% : 42749.637us 00:07:47.849 00:07:47.849 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:47.849 ================================================================================= 00:07:47.849 1.00000% : 10737.822us 00:07:47.849 10.00000% : 12300.603us 00:07:47.849 25.00000% : 13611.323us 00:07:47.849 50.00000% : 14417.920us 00:07:47.849 75.00000% : 15123.692us 00:07:47.849 90.00000% : 17039.360us 00:07:47.849 95.00000% : 18450.905us 00:07:47.849 98.00000% : 23290.486us 00:07:47.849 99.00000% : 30247.385us 00:07:47.849 99.50000% : 39321.600us 00:07:47.849 99.90000% : 40733.145us 00:07:47.849 99.99000% : 40934.794us 00:07:47.849 99.99900% : 40934.794us 00:07:47.849 99.99990% : 40934.794us 00:07:47.849 99.99999% : 40934.794us 00:07:47.849 00:07:47.849 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:47.849 ================================================================================= 00:07:47.849 1.00000% : 10687.409us 00:07:47.849 10.00000% : 12401.428us 00:07:47.849 25.00000% : 13611.323us 00:07:47.849 50.00000% : 14317.095us 00:07:47.849 75.00000% : 15224.517us 00:07:47.849 90.00000% : 17039.360us 00:07:47.849 95.00000% : 18753.378us 00:07:47.849 98.00000% : 
23189.662us 00:07:47.849 99.00000% : 29844.086us 00:07:47.849 99.50000% : 39119.951us 00:07:47.849 99.90000% : 40128.197us 00:07:47.849 99.99000% : 40531.495us 00:07:47.849 99.99900% : 40531.495us 00:07:47.849 99.99990% : 40531.495us 00:07:47.849 99.99999% : 40531.495us 00:07:47.849 00:07:47.849 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:47.849 ================================================================================= 00:07:47.849 1.00000% : 10788.234us 00:07:47.849 10.00000% : 12199.778us 00:07:47.849 25.00000% : 13611.323us 00:07:47.849 50.00000% : 14317.095us 00:07:47.849 75.00000% : 15224.517us 00:07:47.849 90.00000% : 17241.009us 00:07:47.849 95.00000% : 18955.028us 00:07:47.849 98.00000% : 22786.363us 00:07:47.849 99.00000% : 28634.191us 00:07:47.849 99.50000% : 37506.757us 00:07:47.849 99.90000% : 38515.003us 00:07:47.849 99.99000% : 38716.652us 00:07:47.849 99.99900% : 38716.652us 00:07:47.849 99.99990% : 38716.652us 00:07:47.849 99.99999% : 38716.652us 00:07:47.849 00:07:47.849 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:47.849 ================================================================================= 00:07:47.849 1.00000% : 10939.471us 00:07:47.849 10.00000% : 12250.191us 00:07:47.849 25.00000% : 13611.323us 00:07:47.849 50.00000% : 14317.095us 00:07:47.849 75.00000% : 15123.692us 00:07:47.849 90.00000% : 17241.009us 00:07:47.849 95.00000% : 18854.203us 00:07:47.849 98.00000% : 21878.942us 00:07:47.849 99.00000% : 22584.714us 00:07:47.849 99.50000% : 27827.594us 00:07:47.849 99.90000% : 28835.840us 00:07:47.849 99.99000% : 29239.138us 00:07:47.849 99.99900% : 29239.138us 00:07:47.849 99.99990% : 29239.138us 00:07:47.849 99.99999% : 29239.138us 00:07:47.849 00:07:47.849 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:47.849 ============================================================================== 00:07:47.849 Range in us Cumulative IO count 00:07:47.849 10132.874 - 10183.286: 0.0116% ( 1) 00:07:47.849 10233.698 - 10284.111: 0.0231% ( 1) 00:07:47.849 10334.523 - 10384.935: 0.0463% ( 2) 00:07:47.849 10384.935 - 10435.348: 0.0810% ( 3) 00:07:47.849 10435.348 - 10485.760: 0.1157% ( 3) 00:07:47.849 10485.760 - 10536.172: 0.1852% ( 6) 00:07:47.849 10536.172 - 10586.585: 0.2546% ( 6) 00:07:47.849 10586.585 - 10636.997: 0.3472% ( 8) 00:07:47.849 10636.997 - 10687.409: 0.4745% ( 11) 00:07:47.849 10687.409 - 10737.822: 0.5324% ( 5) 00:07:47.849 10737.822 - 10788.234: 0.5671% ( 3) 00:07:47.849 10788.234 - 10838.646: 0.6019% ( 3) 00:07:47.849 10838.646 - 10889.058: 0.6481% ( 4) 00:07:47.849 10889.058 - 10939.471: 0.6829% ( 3) 00:07:47.849 10939.471 - 10989.883: 0.7176% ( 3) 00:07:47.849 10989.883 - 11040.295: 0.7870% ( 6) 00:07:47.849 11040.295 - 11090.708: 0.8681% ( 7) 00:07:47.849 11090.708 - 11141.120: 1.0880% ( 19) 00:07:47.849 11141.120 - 11191.532: 1.3079% ( 19) 00:07:47.849 11191.532 - 11241.945: 1.7014% ( 34) 00:07:47.849 11241.945 - 11292.357: 2.0602% ( 31) 00:07:47.849 11292.357 - 11342.769: 2.5000% ( 38) 00:07:47.849 11342.769 - 11393.182: 3.0093% ( 44) 00:07:47.849 11393.182 - 11443.594: 3.6574% ( 56) 00:07:47.849 11443.594 - 11494.006: 4.2245% ( 49) 00:07:47.849 11494.006 - 11544.418: 4.9769% ( 65) 00:07:47.849 11544.418 - 11594.831: 5.4514% ( 41) 00:07:47.849 11594.831 - 11645.243: 5.8218% ( 32) 00:07:47.849 11645.243 - 11695.655: 6.2037% ( 33) 00:07:47.849 11695.655 - 11746.068: 6.5394% ( 29) 00:07:47.849 11746.068 - 11796.480: 6.9676% ( 37) 00:07:47.849 11796.480 - 11846.892: 7.3032% 
( 29) 00:07:47.849 11846.892 - 11897.305: 7.6620% ( 31) 00:07:47.849 11897.305 - 11947.717: 7.8819% ( 19) 00:07:47.849 11947.717 - 11998.129: 8.0787% ( 17) 00:07:47.849 11998.129 - 12048.542: 8.3333% ( 22) 00:07:47.849 12048.542 - 12098.954: 8.6458% ( 27) 00:07:47.849 12098.954 - 12149.366: 8.9236% ( 24) 00:07:47.849 12149.366 - 12199.778: 9.2940% ( 32) 00:07:47.849 12199.778 - 12250.191: 9.6181% ( 28) 00:07:47.849 12250.191 - 12300.603: 9.9190% ( 26) 00:07:47.849 12300.603 - 12351.015: 10.1620% ( 21) 00:07:47.849 12351.015 - 12401.428: 10.4051% ( 21) 00:07:47.849 12401.428 - 12451.840: 10.8102% ( 35) 00:07:47.849 12451.840 - 12502.252: 11.1111% ( 26) 00:07:47.849 12502.252 - 12552.665: 11.3773% ( 23) 00:07:47.849 12552.665 - 12603.077: 11.5625% ( 16) 00:07:47.849 12603.077 - 12653.489: 11.7361% ( 15) 00:07:47.849 12653.489 - 12703.902: 11.8519% ( 10) 00:07:47.849 12703.902 - 12754.314: 11.9329% ( 7) 00:07:47.849 12754.314 - 12804.726: 12.0486% ( 10) 00:07:47.849 12804.726 - 12855.138: 12.2569% ( 18) 00:07:47.849 12855.138 - 12905.551: 12.4653% ( 18) 00:07:47.849 12905.551 - 13006.375: 13.4606% ( 86) 00:07:47.849 13006.375 - 13107.200: 14.9421% ( 128) 00:07:47.849 13107.200 - 13208.025: 16.6551% ( 148) 00:07:47.849 13208.025 - 13308.849: 19.1551% ( 216) 00:07:47.849 13308.849 - 13409.674: 21.6435% ( 215) 00:07:47.849 13409.674 - 13510.498: 23.7037% ( 178) 00:07:47.849 13510.498 - 13611.323: 26.1343% ( 210) 00:07:47.849 13611.323 - 13712.148: 29.2593% ( 270) 00:07:47.849 13712.148 - 13812.972: 32.7546% ( 302) 00:07:47.849 13812.972 - 13913.797: 36.1921% ( 297) 00:07:47.849 13913.797 - 14014.622: 39.8727% ( 318) 00:07:47.849 14014.622 - 14115.446: 43.5995% ( 322) 00:07:47.849 14115.446 - 14216.271: 47.1759% ( 309) 00:07:47.849 14216.271 - 14317.095: 50.2315% ( 264) 00:07:47.849 14317.095 - 14417.920: 53.1134% ( 249) 00:07:47.849 14417.920 - 14518.745: 56.1806% ( 265) 00:07:47.849 14518.745 - 14619.569: 59.4213% ( 280) 00:07:47.850 14619.569 - 14720.394: 62.6505% ( 279) 00:07:47.850 14720.394 - 14821.218: 65.8449% ( 276) 00:07:47.850 14821.218 - 14922.043: 69.7222% ( 335) 00:07:47.850 14922.043 - 15022.868: 72.9051% ( 275) 00:07:47.850 15022.868 - 15123.692: 75.1620% ( 195) 00:07:47.850 15123.692 - 15224.517: 76.7130% ( 134) 00:07:47.850 15224.517 - 15325.342: 78.4375% ( 149) 00:07:47.850 15325.342 - 15426.166: 80.4282% ( 172) 00:07:47.850 15426.166 - 15526.991: 81.6319% ( 104) 00:07:47.850 15526.991 - 15627.815: 82.6042% ( 84) 00:07:47.850 15627.815 - 15728.640: 83.5185% ( 79) 00:07:47.850 15728.640 - 15829.465: 84.1782% ( 57) 00:07:47.850 15829.465 - 15930.289: 84.5486% ( 32) 00:07:47.850 15930.289 - 16031.114: 84.7917% ( 21) 00:07:47.850 16031.114 - 16131.938: 84.9884% ( 17) 00:07:47.850 16131.938 - 16232.763: 85.2431% ( 22) 00:07:47.850 16232.763 - 16333.588: 85.6134% ( 32) 00:07:47.850 16333.588 - 16434.412: 86.1806% ( 49) 00:07:47.850 16434.412 - 16535.237: 86.6204% ( 38) 00:07:47.850 16535.237 - 16636.062: 87.1412% ( 45) 00:07:47.850 16636.062 - 16736.886: 87.5810% ( 38) 00:07:47.850 16736.886 - 16837.711: 88.3218% ( 64) 00:07:47.850 16837.711 - 16938.535: 88.9699% ( 56) 00:07:47.850 16938.535 - 17039.360: 89.6644% ( 60) 00:07:47.850 17039.360 - 17140.185: 90.2778% ( 53) 00:07:47.850 17140.185 - 17241.009: 90.8796% ( 52) 00:07:47.850 17241.009 - 17341.834: 91.4699% ( 51) 00:07:47.850 17341.834 - 17442.658: 91.9213% ( 39) 00:07:47.850 17442.658 - 17543.483: 92.2454% ( 28) 00:07:47.850 17543.483 - 17644.308: 92.5810% ( 29) 00:07:47.850 17644.308 - 17745.132: 92.8588% ( 24) 00:07:47.850 
17745.132 - 17845.957: 93.1366% ( 24) 00:07:47.850 17845.957 - 17946.782: 93.4028% ( 23) 00:07:47.850 17946.782 - 18047.606: 93.8079% ( 35) 00:07:47.850 18047.606 - 18148.431: 94.0856% ( 24) 00:07:47.850 18148.431 - 18249.255: 94.3634% ( 24) 00:07:47.850 18249.255 - 18350.080: 94.5833% ( 19) 00:07:47.850 18350.080 - 18450.905: 94.9421% ( 31) 00:07:47.850 18450.905 - 18551.729: 95.2778% ( 29) 00:07:47.850 18551.729 - 18652.554: 95.4398% ( 14) 00:07:47.850 18652.554 - 18753.378: 95.7523% ( 27) 00:07:47.850 18753.378 - 18854.203: 95.9606% ( 18) 00:07:47.850 18854.203 - 18955.028: 96.0880% ( 11) 00:07:47.850 18955.028 - 19055.852: 96.2037% ( 10) 00:07:47.850 19055.852 - 19156.677: 96.2500% ( 4) 00:07:47.850 19156.677 - 19257.502: 96.2963% ( 4) 00:07:47.850 19358.326 - 19459.151: 96.3079% ( 1) 00:07:47.850 19559.975 - 19660.800: 96.3657% ( 5) 00:07:47.850 19660.800 - 19761.625: 96.4468% ( 7) 00:07:47.850 19761.625 - 19862.449: 96.6088% ( 14) 00:07:47.850 19862.449 - 19963.274: 96.8287% ( 19) 00:07:47.850 19963.274 - 20064.098: 96.9444% ( 10) 00:07:47.850 20064.098 - 20164.923: 97.0139% ( 6) 00:07:47.850 20164.923 - 20265.748: 97.0370% ( 2) 00:07:47.850 21778.117 - 21878.942: 97.0602% ( 2) 00:07:47.850 21878.942 - 21979.766: 97.0718% ( 1) 00:07:47.850 21979.766 - 22080.591: 97.1528% ( 7) 00:07:47.850 22080.591 - 22181.415: 97.2338% ( 7) 00:07:47.850 22181.415 - 22282.240: 97.4306% ( 17) 00:07:47.850 22282.240 - 22383.065: 97.6505% ( 19) 00:07:47.850 22383.065 - 22483.889: 97.8588% ( 18) 00:07:47.850 22483.889 - 22584.714: 97.9745% ( 10) 00:07:47.850 22584.714 - 22685.538: 98.1134% ( 12) 00:07:47.850 22685.538 - 22786.363: 98.1944% ( 7) 00:07:47.850 22786.363 - 22887.188: 98.2523% ( 5) 00:07:47.850 22887.188 - 22988.012: 98.3218% ( 6) 00:07:47.850 22988.012 - 23088.837: 98.3681% ( 4) 00:07:47.850 23088.837 - 23189.662: 98.4375% ( 6) 00:07:47.850 23189.662 - 23290.486: 98.4954% ( 5) 00:07:47.850 23290.486 - 23391.311: 98.5185% ( 2) 00:07:47.850 32667.175 - 32868.825: 98.5301% ( 1) 00:07:47.850 32868.825 - 33070.474: 98.6458% ( 10) 00:07:47.850 33070.474 - 33272.123: 98.7384% ( 8) 00:07:47.850 33272.123 - 33473.772: 98.7963% ( 5) 00:07:47.850 33473.772 - 33675.422: 98.8542% ( 5) 00:07:47.850 33675.422 - 33877.071: 98.8889% ( 3) 00:07:47.850 33877.071 - 34078.720: 98.9120% ( 2) 00:07:47.850 34078.720 - 34280.369: 99.0046% ( 8) 00:07:47.850 34280.369 - 34482.018: 99.0856% ( 7) 00:07:47.850 34482.018 - 34683.668: 99.1667% ( 7) 00:07:47.850 34683.668 - 34885.317: 99.2477% ( 7) 00:07:47.850 34885.317 - 35086.966: 99.2593% ( 1) 00:07:47.850 40934.794 - 41136.443: 99.2824% ( 2) 00:07:47.850 41136.443 - 41338.092: 99.3171% ( 3) 00:07:47.850 41338.092 - 41539.742: 99.4444% ( 11) 00:07:47.850 41943.040 - 42144.689: 99.4676% ( 2) 00:07:47.850 42144.689 - 42346.338: 99.5139% ( 4) 00:07:47.850 42346.338 - 42547.988: 99.5370% ( 2) 00:07:47.850 42547.988 - 42749.637: 99.5718% ( 3) 00:07:47.850 42749.637 - 42951.286: 99.7338% ( 14) 00:07:47.850 42951.286 - 43152.935: 99.7801% ( 4) 00:07:47.850 43152.935 - 43354.585: 99.8495% ( 6) 00:07:47.850 43354.585 - 43556.234: 99.9074% ( 5) 00:07:47.850 43556.234 - 43757.883: 99.9769% ( 6) 00:07:47.850 43757.883 - 43959.532: 100.0000% ( 2) 00:07:47.850 00:07:47.850 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:47.850 ============================================================================== 00:07:47.850 Range in us Cumulative IO count 00:07:47.850 9729.575 - 9779.988: 0.0116% ( 1) 00:07:47.850 9981.637 - 10032.049: 0.0347% ( 2) 00:07:47.850 10032.049 
- 10082.462: 0.0926% ( 5) 00:07:47.850 10082.462 - 10132.874: 0.2662% ( 15) 00:07:47.850 10132.874 - 10183.286: 0.2894% ( 2) 00:07:47.850 10183.286 - 10233.698: 0.3472% ( 5) 00:07:47.850 10284.111 - 10334.523: 0.3588% ( 1) 00:07:47.850 10334.523 - 10384.935: 0.3704% ( 1) 00:07:47.850 10384.935 - 10435.348: 0.3819% ( 1) 00:07:47.850 10435.348 - 10485.760: 0.4167% ( 3) 00:07:47.850 10485.760 - 10536.172: 0.4282% ( 1) 00:07:47.850 10536.172 - 10586.585: 0.4398% ( 1) 00:07:47.850 10586.585 - 10636.997: 0.4630% ( 2) 00:07:47.850 10636.997 - 10687.409: 0.4977% ( 3) 00:07:47.850 10687.409 - 10737.822: 0.5324% ( 3) 00:07:47.850 10737.822 - 10788.234: 0.5556% ( 2) 00:07:47.850 10788.234 - 10838.646: 0.5787% ( 2) 00:07:47.850 10838.646 - 10889.058: 0.7407% ( 14) 00:07:47.850 10889.058 - 10939.471: 0.9028% ( 14) 00:07:47.850 10939.471 - 10989.883: 1.1343% ( 20) 00:07:47.850 10989.883 - 11040.295: 1.2616% ( 11) 00:07:47.850 11040.295 - 11090.708: 1.4352% ( 15) 00:07:47.850 11090.708 - 11141.120: 1.7593% ( 28) 00:07:47.850 11141.120 - 11191.532: 2.1181% ( 31) 00:07:47.850 11191.532 - 11241.945: 2.4653% ( 30) 00:07:47.850 11241.945 - 11292.357: 2.6389% ( 15) 00:07:47.850 11292.357 - 11342.769: 2.8356% ( 17) 00:07:47.850 11342.769 - 11393.182: 3.0208% ( 16) 00:07:47.850 11393.182 - 11443.594: 3.1134% ( 8) 00:07:47.850 11443.594 - 11494.006: 3.3449% ( 20) 00:07:47.850 11494.006 - 11544.418: 3.5417% ( 17) 00:07:47.850 11544.418 - 11594.831: 3.9005% ( 31) 00:07:47.850 11594.831 - 11645.243: 4.2361% ( 29) 00:07:47.850 11645.243 - 11695.655: 4.5023% ( 23) 00:07:47.850 11695.655 - 11746.068: 4.7917% ( 25) 00:07:47.850 11746.068 - 11796.480: 5.2894% ( 43) 00:07:47.850 11796.480 - 11846.892: 5.7292% ( 38) 00:07:47.850 11846.892 - 11897.305: 6.0764% ( 30) 00:07:47.850 11897.305 - 11947.717: 6.4815% ( 35) 00:07:47.850 11947.717 - 11998.129: 6.8171% ( 29) 00:07:47.850 11998.129 - 12048.542: 7.3264% ( 44) 00:07:47.850 12048.542 - 12098.954: 7.6273% ( 26) 00:07:47.850 12098.954 - 12149.366: 8.0324% ( 35) 00:07:47.850 12149.366 - 12199.778: 8.5185% ( 42) 00:07:47.850 12199.778 - 12250.191: 8.9236% ( 35) 00:07:47.850 12250.191 - 12300.603: 9.4329% ( 44) 00:07:47.850 12300.603 - 12351.015: 9.9190% ( 42) 00:07:47.850 12351.015 - 12401.428: 10.4630% ( 47) 00:07:47.850 12401.428 - 12451.840: 10.9606% ( 43) 00:07:47.850 12451.840 - 12502.252: 11.2500% ( 25) 00:07:47.850 12502.252 - 12552.665: 11.6088% ( 31) 00:07:47.850 12552.665 - 12603.077: 12.1181% ( 44) 00:07:47.850 12603.077 - 12653.489: 12.4537% ( 29) 00:07:47.850 12653.489 - 12703.902: 12.8935% ( 38) 00:07:47.850 12703.902 - 12754.314: 13.2407% ( 30) 00:07:47.850 12754.314 - 12804.726: 13.4606% ( 19) 00:07:47.850 12804.726 - 12855.138: 13.7616% ( 26) 00:07:47.850 12855.138 - 12905.551: 14.2824% ( 45) 00:07:47.850 12905.551 - 13006.375: 15.6829% ( 121) 00:07:47.850 13006.375 - 13107.200: 17.6389% ( 169) 00:07:47.850 13107.200 - 13208.025: 19.2014% ( 135) 00:07:47.850 13208.025 - 13308.849: 20.7639% ( 135) 00:07:47.850 13308.849 - 13409.674: 22.5694% ( 156) 00:07:47.850 13409.674 - 13510.498: 24.8264% ( 195) 00:07:47.850 13510.498 - 13611.323: 27.3611% ( 219) 00:07:47.850 13611.323 - 13712.148: 29.3981% ( 176) 00:07:47.850 13712.148 - 13812.972: 31.8634% ( 213) 00:07:47.850 13812.972 - 13913.797: 35.8565% ( 345) 00:07:47.850 13913.797 - 14014.622: 39.2477% ( 293) 00:07:47.850 14014.622 - 14115.446: 42.9051% ( 316) 00:07:47.850 14115.446 - 14216.271: 46.4236% ( 304) 00:07:47.850 14216.271 - 14317.095: 50.0926% ( 317) 00:07:47.850 14317.095 - 14417.920: 53.7037% ( 312) 
00:07:47.850 14417.920 - 14518.745: 57.2685% ( 308) 00:07:47.850 14518.745 - 14619.569: 60.7292% ( 299) 00:07:47.850 14619.569 - 14720.394: 63.9931% ( 282) 00:07:47.850 14720.394 - 14821.218: 66.9907% ( 259) 00:07:47.850 14821.218 - 14922.043: 69.2824% ( 198) 00:07:47.850 14922.043 - 15022.868: 71.6667% ( 206) 00:07:47.850 15022.868 - 15123.692: 74.2245% ( 221) 00:07:47.850 15123.692 - 15224.517: 75.9606% ( 150) 00:07:47.850 15224.517 - 15325.342: 77.3264% ( 118) 00:07:47.850 15325.342 - 15426.166: 79.0972% ( 153) 00:07:47.850 15426.166 - 15526.991: 80.6250% ( 132) 00:07:47.850 15526.991 - 15627.815: 81.8287% ( 104) 00:07:47.850 15627.815 - 15728.640: 82.9398% ( 96) 00:07:47.850 15728.640 - 15829.465: 83.6921% ( 65) 00:07:47.850 15829.465 - 15930.289: 84.1435% ( 39) 00:07:47.850 15930.289 - 16031.114: 84.5139% ( 32) 00:07:47.850 16031.114 - 16131.938: 85.1157% ( 52) 00:07:47.850 16131.938 - 16232.763: 85.8796% ( 66) 00:07:47.850 16232.763 - 16333.588: 86.3542% ( 41) 00:07:47.850 16333.588 - 16434.412: 86.7361% ( 33) 00:07:47.850 16434.412 - 16535.237: 87.0602% ( 28) 00:07:47.850 16535.237 - 16636.062: 87.4421% ( 33) 00:07:47.850 16636.062 - 16736.886: 88.0093% ( 49) 00:07:47.850 16736.886 - 16837.711: 88.4606% ( 39) 00:07:47.850 16837.711 - 16938.535: 88.8657% ( 35) 00:07:47.850 16938.535 - 17039.360: 89.4907% ( 54) 00:07:47.850 17039.360 - 17140.185: 89.9884% ( 43) 00:07:47.850 17140.185 - 17241.009: 90.3588% ( 32) 00:07:47.850 17241.009 - 17341.834: 90.9144% ( 48) 00:07:47.850 17341.834 - 17442.658: 91.2731% ( 31) 00:07:47.850 17442.658 - 17543.483: 91.7593% ( 42) 00:07:47.850 17543.483 - 17644.308: 92.4421% ( 59) 00:07:47.850 17644.308 - 17745.132: 92.9398% ( 43) 00:07:47.850 17745.132 - 17845.957: 93.3102% ( 32) 00:07:47.850 17845.957 - 17946.782: 93.6343% ( 28) 00:07:47.850 17946.782 - 18047.606: 93.9120% ( 24) 00:07:47.850 18047.606 - 18148.431: 94.1319% ( 19) 00:07:47.850 18148.431 - 18249.255: 94.3750% ( 21) 00:07:47.850 18249.255 - 18350.080: 94.7338% ( 31) 00:07:47.850 18350.080 - 18450.905: 95.0694% ( 29) 00:07:47.850 18450.905 - 18551.729: 95.3356% ( 23) 00:07:47.850 18551.729 - 18652.554: 95.5440% ( 18) 00:07:47.850 18652.554 - 18753.378: 95.6366% ( 8) 00:07:47.850 18753.378 - 18854.203: 95.6829% ( 4) 00:07:47.850 18854.203 - 18955.028: 95.7755% ( 8) 00:07:47.850 18955.028 - 19055.852: 95.8333% ( 5) 00:07:47.850 19055.852 - 19156.677: 96.0417% ( 18) 00:07:47.850 19156.677 - 19257.502: 96.2616% ( 19) 00:07:47.850 19257.502 - 19358.326: 96.5394% ( 24) 00:07:47.850 19358.326 - 19459.151: 96.6204% ( 7) 00:07:47.850 19459.151 - 19559.975: 96.6551% ( 3) 00:07:47.850 19559.975 - 19660.800: 96.6667% ( 1) 00:07:47.850 19660.800 - 19761.625: 96.6782% ( 1) 00:07:47.850 19761.625 - 19862.449: 96.7361% ( 5) 00:07:47.850 19862.449 - 19963.274: 96.7824% ( 4) 00:07:47.850 19963.274 - 20064.098: 96.8287% ( 4) 00:07:47.850 20064.098 - 20164.923: 96.8750% ( 4) 00:07:47.850 20164.923 - 20265.748: 96.9213% ( 4) 00:07:47.850 20265.748 - 20366.572: 96.9907% ( 6) 00:07:47.850 20366.572 - 20467.397: 97.0255% ( 3) 00:07:47.850 20467.397 - 20568.222: 97.0370% ( 1) 00:07:47.850 21878.942 - 21979.766: 97.1065% ( 6) 00:07:47.850 21979.766 - 22080.591: 97.1528% ( 4) 00:07:47.850 22080.591 - 22181.415: 97.1991% ( 4) 00:07:47.850 22181.415 - 22282.240: 97.2454% ( 4) 00:07:47.850 22282.240 - 22383.065: 97.2917% ( 4) 00:07:47.850 22383.065 - 22483.889: 97.3495% ( 5) 00:07:47.850 22483.889 - 22584.714: 97.4306% ( 7) 00:07:47.850 22584.714 - 22685.538: 97.4653% ( 3) 00:07:47.850 22685.538 - 22786.363: 97.5579% ( 
8) 00:07:47.850 22786.363 - 22887.188: 97.8356% ( 24) 00:07:47.850 22887.188 - 22988.012: 98.0787% ( 21) 00:07:47.850 22988.012 - 23088.837: 98.2060% ( 11) 00:07:47.850 23088.837 - 23189.662: 98.2870% ( 7) 00:07:47.850 23189.662 - 23290.486: 98.3681% ( 7) 00:07:47.850 23290.486 - 23391.311: 98.4259% ( 5) 00:07:47.850 23391.311 - 23492.135: 98.4375% ( 1) 00:07:47.850 23492.135 - 23592.960: 98.4838% ( 4) 00:07:47.850 23592.960 - 23693.785: 98.5185% ( 3) 00:07:47.850 30449.034 - 30650.683: 98.5648% ( 4) 00:07:47.850 30650.683 - 30852.332: 98.6458% ( 7) 00:07:47.850 30852.332 - 31053.982: 98.7153% ( 6) 00:07:47.850 31053.982 - 31255.631: 98.8079% ( 8) 00:07:47.850 31255.631 - 31457.280: 98.8773% ( 6) 00:07:47.850 31457.280 - 31658.929: 98.9583% ( 7) 00:07:47.850 31658.929 - 31860.578: 99.0278% ( 6) 00:07:48.113 31860.578 - 32062.228: 99.0856% ( 5) 00:07:48.113 32062.228 - 32263.877: 99.1667% ( 7) 00:07:48.113 32263.877 - 32465.526: 99.2477% ( 7) 00:07:48.113 32465.526 - 32667.175: 99.2593% ( 1) 00:07:48.113 40128.197 - 40329.846: 99.2824% ( 2) 00:07:48.113 40329.846 - 40531.495: 99.3403% ( 5) 00:07:48.113 40531.495 - 40733.145: 99.3750% ( 3) 00:07:48.113 40733.145 - 40934.794: 99.4792% ( 9) 00:07:48.113 40934.794 - 41136.443: 99.5023% ( 2) 00:07:48.113 41136.443 - 41338.092: 99.5718% ( 6) 00:07:48.113 41338.092 - 41539.742: 99.6412% ( 6) 00:07:48.113 41539.742 - 41741.391: 99.6991% ( 5) 00:07:48.113 41741.391 - 41943.040: 99.7569% ( 5) 00:07:48.113 41943.040 - 42144.689: 99.8264% ( 6) 00:07:48.113 42144.689 - 42346.338: 99.8958% ( 6) 00:07:48.113 42346.338 - 42547.988: 99.9421% ( 4) 00:07:48.113 42547.988 - 42749.637: 100.0000% ( 5) 00:07:48.113 00:07:48.113 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:48.113 ============================================================================== 00:07:48.113 Range in us Cumulative IO count 00:07:48.113 10334.523 - 10384.935: 0.0116% ( 1) 00:07:48.113 10384.935 - 10435.348: 0.1157% ( 9) 00:07:48.114 10435.348 - 10485.760: 0.2662% ( 13) 00:07:48.114 10485.760 - 10536.172: 0.4051% ( 12) 00:07:48.114 10536.172 - 10586.585: 0.5556% ( 13) 00:07:48.114 10586.585 - 10636.997: 0.7523% ( 17) 00:07:48.114 10636.997 - 10687.409: 0.9144% ( 14) 00:07:48.114 10687.409 - 10737.822: 1.0532% ( 12) 00:07:48.114 10737.822 - 10788.234: 1.1574% ( 9) 00:07:48.114 10788.234 - 10838.646: 1.2963% ( 12) 00:07:48.114 10838.646 - 10889.058: 1.4583% ( 14) 00:07:48.114 10889.058 - 10939.471: 1.5509% ( 8) 00:07:48.114 10939.471 - 10989.883: 1.6204% ( 6) 00:07:48.114 10989.883 - 11040.295: 1.8056% ( 16) 00:07:48.114 11040.295 - 11090.708: 2.0139% ( 18) 00:07:48.114 11090.708 - 11141.120: 2.1412% ( 11) 00:07:48.114 11141.120 - 11191.532: 2.2454% ( 9) 00:07:48.114 11191.532 - 11241.945: 2.3843% ( 12) 00:07:48.114 11241.945 - 11292.357: 2.5579% ( 15) 00:07:48.114 11292.357 - 11342.769: 2.7778% ( 19) 00:07:48.114 11342.769 - 11393.182: 3.0787% ( 26) 00:07:48.114 11393.182 - 11443.594: 3.3218% ( 21) 00:07:48.114 11443.594 - 11494.006: 3.5417% ( 19) 00:07:48.114 11494.006 - 11544.418: 3.8889% ( 30) 00:07:48.114 11544.418 - 11594.831: 4.2014% ( 27) 00:07:48.114 11594.831 - 11645.243: 4.5486% ( 30) 00:07:48.114 11645.243 - 11695.655: 4.9884% ( 38) 00:07:48.114 11695.655 - 11746.068: 5.5556% ( 49) 00:07:48.114 11746.068 - 11796.480: 5.8796% ( 28) 00:07:48.114 11796.480 - 11846.892: 6.0880% ( 18) 00:07:48.114 11846.892 - 11897.305: 6.3194% ( 20) 00:07:48.114 11897.305 - 11947.717: 6.6782% ( 31) 00:07:48.114 11947.717 - 11998.129: 7.0949% ( 36) 00:07:48.114 11998.129 - 
12048.542: 7.5116% ( 36) 00:07:48.114 12048.542 - 12098.954: 7.9630% ( 39) 00:07:48.114 12098.954 - 12149.366: 8.6227% ( 57) 00:07:48.114 12149.366 - 12199.778: 9.1667% ( 47) 00:07:48.114 12199.778 - 12250.191: 9.7801% ( 53) 00:07:48.114 12250.191 - 12300.603: 10.2894% ( 44) 00:07:48.114 12300.603 - 12351.015: 10.8449% ( 48) 00:07:48.114 12351.015 - 12401.428: 11.3310% ( 42) 00:07:48.114 12401.428 - 12451.840: 11.7477% ( 36) 00:07:48.114 12451.840 - 12502.252: 12.0949% ( 30) 00:07:48.114 12502.252 - 12552.665: 12.3495% ( 22) 00:07:48.114 12552.665 - 12603.077: 12.5694% ( 19) 00:07:48.114 12603.077 - 12653.489: 12.8588% ( 25) 00:07:48.114 12653.489 - 12703.902: 13.2176% ( 31) 00:07:48.114 12703.902 - 12754.314: 13.5301% ( 27) 00:07:48.114 12754.314 - 12804.726: 13.7731% ( 21) 00:07:48.114 12804.726 - 12855.138: 13.9931% ( 19) 00:07:48.114 12855.138 - 12905.551: 14.2708% ( 24) 00:07:48.114 12905.551 - 13006.375: 14.9190% ( 56) 00:07:48.114 13006.375 - 13107.200: 15.7870% ( 75) 00:07:48.114 13107.200 - 13208.025: 17.3727% ( 137) 00:07:48.114 13208.025 - 13308.849: 19.3750% ( 173) 00:07:48.114 13308.849 - 13409.674: 21.3889% ( 174) 00:07:48.114 13409.674 - 13510.498: 23.3796% ( 172) 00:07:48.114 13510.498 - 13611.323: 25.2546% ( 162) 00:07:48.114 13611.323 - 13712.148: 27.3727% ( 183) 00:07:48.114 13712.148 - 13812.972: 29.4329% ( 178) 00:07:48.114 13812.972 - 13913.797: 32.4190% ( 258) 00:07:48.114 13913.797 - 14014.622: 36.4815% ( 351) 00:07:48.114 14014.622 - 14115.446: 40.6366% ( 359) 00:07:48.114 14115.446 - 14216.271: 44.8843% ( 367) 00:07:48.114 14216.271 - 14317.095: 48.9468% ( 351) 00:07:48.114 14317.095 - 14417.920: 52.8819% ( 340) 00:07:48.114 14417.920 - 14518.745: 57.3148% ( 383) 00:07:48.114 14518.745 - 14619.569: 60.5208% ( 277) 00:07:48.114 14619.569 - 14720.394: 64.1782% ( 316) 00:07:48.114 14720.394 - 14821.218: 67.1181% ( 254) 00:07:48.114 14821.218 - 14922.043: 70.1968% ( 266) 00:07:48.114 14922.043 - 15022.868: 72.8935% ( 233) 00:07:48.114 15022.868 - 15123.692: 75.3472% ( 212) 00:07:48.114 15123.692 - 15224.517: 77.1875% ( 159) 00:07:48.114 15224.517 - 15325.342: 79.0394% ( 160) 00:07:48.114 15325.342 - 15426.166: 80.4745% ( 124) 00:07:48.114 15426.166 - 15526.991: 81.8056% ( 115) 00:07:48.114 15526.991 - 15627.815: 83.0093% ( 104) 00:07:48.114 15627.815 - 15728.640: 83.9699% ( 83) 00:07:48.114 15728.640 - 15829.465: 84.6644% ( 60) 00:07:48.114 15829.465 - 15930.289: 85.1389% ( 41) 00:07:48.114 15930.289 - 16031.114: 85.4398% ( 26) 00:07:48.114 16031.114 - 16131.938: 85.7292% ( 25) 00:07:48.114 16131.938 - 16232.763: 86.0648% ( 29) 00:07:48.114 16232.763 - 16333.588: 86.5741% ( 44) 00:07:48.114 16333.588 - 16434.412: 87.2685% ( 60) 00:07:48.114 16434.412 - 16535.237: 87.6389% ( 32) 00:07:48.114 16535.237 - 16636.062: 88.3218% ( 59) 00:07:48.114 16636.062 - 16736.886: 88.7963% ( 41) 00:07:48.114 16736.886 - 16837.711: 89.2708% ( 41) 00:07:48.114 16837.711 - 16938.535: 89.7801% ( 44) 00:07:48.114 16938.535 - 17039.360: 90.2546% ( 41) 00:07:48.114 17039.360 - 17140.185: 90.5440% ( 25) 00:07:48.114 17140.185 - 17241.009: 90.8912% ( 30) 00:07:48.114 17241.009 - 17341.834: 91.3426% ( 39) 00:07:48.114 17341.834 - 17442.658: 91.8056% ( 40) 00:07:48.114 17442.658 - 17543.483: 92.1991% ( 34) 00:07:48.114 17543.483 - 17644.308: 92.8935% ( 60) 00:07:48.114 17644.308 - 17745.132: 93.2292% ( 29) 00:07:48.114 17745.132 - 17845.957: 93.5880% ( 31) 00:07:48.114 17845.957 - 17946.782: 93.9236% ( 29) 00:07:48.114 17946.782 - 18047.606: 94.2477% ( 28) 00:07:48.114 18047.606 - 18148.431: 
94.5602% ( 27) 00:07:48.114 18148.431 - 18249.255: 94.7454% ( 16) 00:07:48.114 18249.255 - 18350.080: 94.9421% ( 17) 00:07:48.114 18350.080 - 18450.905: 95.2546% ( 27) 00:07:48.114 18450.905 - 18551.729: 95.3588% ( 9) 00:07:48.114 18551.729 - 18652.554: 95.4282% ( 6) 00:07:48.114 18652.554 - 18753.378: 95.4977% ( 6) 00:07:48.114 18753.378 - 18854.203: 95.5324% ( 3) 00:07:48.114 18854.203 - 18955.028: 95.5556% ( 2) 00:07:48.114 19459.151 - 19559.975: 95.5787% ( 2) 00:07:48.114 19559.975 - 19660.800: 95.6944% ( 10) 00:07:48.114 19660.800 - 19761.625: 95.8565% ( 14) 00:07:48.114 19761.625 - 19862.449: 96.3426% ( 42) 00:07:48.114 19862.449 - 19963.274: 96.5278% ( 16) 00:07:48.114 19963.274 - 20064.098: 96.6667% ( 12) 00:07:48.114 20064.098 - 20164.923: 96.7824% ( 10) 00:07:48.114 20164.923 - 20265.748: 96.8403% ( 5) 00:07:48.114 20265.748 - 20366.572: 96.8981% ( 5) 00:07:48.114 20366.572 - 20467.397: 96.9560% ( 5) 00:07:48.114 20467.397 - 20568.222: 97.0139% ( 5) 00:07:48.114 20568.222 - 20669.046: 97.0370% ( 2) 00:07:48.114 21173.169 - 21273.994: 97.1065% ( 6) 00:07:48.114 21273.994 - 21374.818: 97.1644% ( 5) 00:07:48.114 21374.818 - 21475.643: 97.2454% ( 7) 00:07:48.114 21475.643 - 21576.468: 97.3148% ( 6) 00:07:48.114 21576.468 - 21677.292: 97.3843% ( 6) 00:07:48.114 21677.292 - 21778.117: 97.4537% ( 6) 00:07:48.114 21778.117 - 21878.942: 97.5116% ( 5) 00:07:48.114 21878.942 - 21979.766: 97.5694% ( 5) 00:07:48.114 21979.766 - 22080.591: 97.6042% ( 3) 00:07:48.114 22080.591 - 22181.415: 97.6620% ( 5) 00:07:48.114 22181.415 - 22282.240: 97.7315% ( 6) 00:07:48.114 22282.240 - 22383.065: 97.7778% ( 4) 00:07:48.114 22887.188 - 22988.012: 97.7894% ( 1) 00:07:48.114 22988.012 - 23088.837: 97.8241% ( 3) 00:07:48.114 23088.837 - 23189.662: 97.8819% ( 5) 00:07:48.114 23189.662 - 23290.486: 98.0787% ( 17) 00:07:48.114 23290.486 - 23391.311: 98.3565% ( 24) 00:07:48.114 23391.311 - 23492.135: 98.4606% ( 9) 00:07:48.114 23492.135 - 23592.960: 98.5185% ( 5) 00:07:48.114 28835.840 - 29037.489: 98.5301% ( 1) 00:07:48.114 29037.489 - 29239.138: 98.5995% ( 6) 00:07:48.114 29239.138 - 29440.788: 98.6806% ( 7) 00:07:48.114 29440.788 - 29642.437: 98.7731% ( 8) 00:07:48.114 29642.437 - 29844.086: 98.8426% ( 6) 00:07:48.114 29844.086 - 30045.735: 98.9352% ( 8) 00:07:48.114 30045.735 - 30247.385: 99.0046% ( 6) 00:07:48.114 30247.385 - 30449.034: 99.0972% ( 8) 00:07:48.114 30449.034 - 30650.683: 99.1782% ( 7) 00:07:48.114 30650.683 - 30852.332: 99.2593% ( 7) 00:07:48.114 38515.003 - 38716.652: 99.3171% ( 5) 00:07:48.114 38716.652 - 38918.302: 99.3750% ( 5) 00:07:48.114 38918.302 - 39119.951: 99.4444% ( 6) 00:07:48.114 39119.951 - 39321.600: 99.5023% ( 5) 00:07:48.114 39321.600 - 39523.249: 99.5718% ( 6) 00:07:48.114 39523.249 - 39724.898: 99.6296% ( 5) 00:07:48.114 39724.898 - 39926.548: 99.6991% ( 6) 00:07:48.114 39926.548 - 40128.197: 99.7454% ( 4) 00:07:48.114 40128.197 - 40329.846: 99.8148% ( 6) 00:07:48.114 40329.846 - 40531.495: 99.8843% ( 6) 00:07:48.114 40531.495 - 40733.145: 99.9537% ( 6) 00:07:48.114 40733.145 - 40934.794: 100.0000% ( 4) 00:07:48.114 00:07:48.114 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:48.114 ============================================================================== 00:07:48.114 Range in us Cumulative IO count 00:07:48.114 9981.637 - 10032.049: 0.0347% ( 3) 00:07:48.115 10032.049 - 10082.462: 0.0694% ( 3) 00:07:48.115 10082.462 - 10132.874: 0.1389% ( 6) 00:07:48.115 10132.874 - 10183.286: 0.2199% ( 7) 00:07:48.115 10183.286 - 10233.698: 0.3009% ( 7) 
00:07:48.115 10233.698 - 10284.111: 0.4861% ( 16) 00:07:48.115 10284.111 - 10334.523: 0.5440% ( 5) 00:07:48.115 10334.523 - 10384.935: 0.6019% ( 5) 00:07:48.115 10384.935 - 10435.348: 0.6944% ( 8) 00:07:48.115 10435.348 - 10485.760: 0.7639% ( 6) 00:07:48.115 10485.760 - 10536.172: 0.7986% ( 3) 00:07:48.115 10536.172 - 10586.585: 0.8565% ( 5) 00:07:48.115 10586.585 - 10636.997: 0.9491% ( 8) 00:07:48.115 10636.997 - 10687.409: 1.0880% ( 12) 00:07:48.115 10687.409 - 10737.822: 1.1921% ( 9) 00:07:48.115 10737.822 - 10788.234: 1.3889% ( 17) 00:07:48.115 10788.234 - 10838.646: 1.5856% ( 17) 00:07:48.115 10838.646 - 10889.058: 1.7361% ( 13) 00:07:48.115 10889.058 - 10939.471: 1.9213% ( 16) 00:07:48.115 10939.471 - 10989.883: 2.3032% ( 33) 00:07:48.115 10989.883 - 11040.295: 2.5000% ( 17) 00:07:48.115 11040.295 - 11090.708: 2.6852% ( 16) 00:07:48.115 11090.708 - 11141.120: 2.8588% ( 15) 00:07:48.115 11141.120 - 11191.532: 3.0556% ( 17) 00:07:48.115 11191.532 - 11241.945: 3.2870% ( 20) 00:07:48.115 11241.945 - 11292.357: 3.5880% ( 26) 00:07:48.115 11292.357 - 11342.769: 3.8310% ( 21) 00:07:48.115 11342.769 - 11393.182: 4.1088% ( 24) 00:07:48.115 11393.182 - 11443.594: 4.3750% ( 23) 00:07:48.115 11443.594 - 11494.006: 4.7917% ( 36) 00:07:48.115 11494.006 - 11544.418: 5.0347% ( 21) 00:07:48.115 11544.418 - 11594.831: 5.3472% ( 27) 00:07:48.115 11594.831 - 11645.243: 5.7060% ( 31) 00:07:48.115 11645.243 - 11695.655: 6.0764% ( 32) 00:07:48.115 11695.655 - 11746.068: 6.4352% ( 31) 00:07:48.115 11746.068 - 11796.480: 6.7593% ( 28) 00:07:48.115 11796.480 - 11846.892: 7.0255% ( 23) 00:07:48.115 11846.892 - 11897.305: 7.2454% ( 19) 00:07:48.115 11897.305 - 11947.717: 7.4653% ( 19) 00:07:48.115 11947.717 - 11998.129: 7.7315% ( 23) 00:07:48.115 11998.129 - 12048.542: 8.1019% ( 32) 00:07:48.115 12048.542 - 12098.954: 8.4722% ( 32) 00:07:48.115 12098.954 - 12149.366: 8.6921% ( 19) 00:07:48.115 12149.366 - 12199.778: 8.8426% ( 13) 00:07:48.115 12199.778 - 12250.191: 9.0509% ( 18) 00:07:48.115 12250.191 - 12300.603: 9.3056% ( 22) 00:07:48.115 12300.603 - 12351.015: 9.7685% ( 40) 00:07:48.115 12351.015 - 12401.428: 10.3588% ( 51) 00:07:48.115 12401.428 - 12451.840: 10.9144% ( 48) 00:07:48.115 12451.840 - 12502.252: 11.7361% ( 71) 00:07:48.115 12502.252 - 12552.665: 12.1412% ( 35) 00:07:48.115 12552.665 - 12603.077: 12.6505% ( 44) 00:07:48.115 12603.077 - 12653.489: 13.1481% ( 43) 00:07:48.115 12653.489 - 12703.902: 13.5648% ( 36) 00:07:48.115 12703.902 - 12754.314: 14.1435% ( 50) 00:07:48.115 12754.314 - 12804.726: 14.6991% ( 48) 00:07:48.115 12804.726 - 12855.138: 15.1389% ( 38) 00:07:48.115 12855.138 - 12905.551: 15.5787% ( 38) 00:07:48.115 12905.551 - 13006.375: 16.4815% ( 78) 00:07:48.115 13006.375 - 13107.200: 17.3495% ( 75) 00:07:48.115 13107.200 - 13208.025: 18.6921% ( 116) 00:07:48.115 13208.025 - 13308.849: 19.9421% ( 108) 00:07:48.115 13308.849 - 13409.674: 21.8403% ( 164) 00:07:48.115 13409.674 - 13510.498: 23.3796% ( 133) 00:07:48.115 13510.498 - 13611.323: 25.8565% ( 214) 00:07:48.115 13611.323 - 13712.148: 28.1829% ( 201) 00:07:48.115 13712.148 - 13812.972: 30.7755% ( 224) 00:07:48.115 13812.972 - 13913.797: 34.6875% ( 338) 00:07:48.115 13913.797 - 14014.622: 39.1435% ( 385) 00:07:48.115 14014.622 - 14115.446: 43.2986% ( 359) 00:07:48.115 14115.446 - 14216.271: 46.9213% ( 313) 00:07:48.115 14216.271 - 14317.095: 51.4815% ( 394) 00:07:48.115 14317.095 - 14417.920: 54.8495% ( 291) 00:07:48.115 14417.920 - 14518.745: 58.1597% ( 286) 00:07:48.115 14518.745 - 14619.569: 61.5278% ( 291) 00:07:48.115 
14619.569 - 14720.394: 64.3750% ( 246) 00:07:48.115 14720.394 - 14821.218: 67.2222% ( 246) 00:07:48.115 14821.218 - 14922.043: 69.8727% ( 229) 00:07:48.115 14922.043 - 15022.868: 72.1065% ( 193) 00:07:48.115 15022.868 - 15123.692: 74.4560% ( 203) 00:07:48.115 15123.692 - 15224.517: 77.0255% ( 222) 00:07:48.115 15224.517 - 15325.342: 79.0856% ( 178) 00:07:48.115 15325.342 - 15426.166: 80.3704% ( 111) 00:07:48.115 15426.166 - 15526.991: 81.5046% ( 98) 00:07:48.115 15526.991 - 15627.815: 82.4306% ( 80) 00:07:48.115 15627.815 - 15728.640: 83.3102% ( 76) 00:07:48.115 15728.640 - 15829.465: 83.8889% ( 50) 00:07:48.115 15829.465 - 15930.289: 84.3287% ( 38) 00:07:48.115 15930.289 - 16031.114: 84.9421% ( 53) 00:07:48.115 16031.114 - 16131.938: 85.4051% ( 40) 00:07:48.115 16131.938 - 16232.763: 85.9722% ( 49) 00:07:48.115 16232.763 - 16333.588: 86.4352% ( 40) 00:07:48.115 16333.588 - 16434.412: 87.2222% ( 68) 00:07:48.115 16434.412 - 16535.237: 87.6042% ( 33) 00:07:48.115 16535.237 - 16636.062: 88.2292% ( 54) 00:07:48.115 16636.062 - 16736.886: 89.0162% ( 68) 00:07:48.115 16736.886 - 16837.711: 89.3866% ( 32) 00:07:48.115 16837.711 - 16938.535: 89.8843% ( 43) 00:07:48.115 16938.535 - 17039.360: 90.1968% ( 27) 00:07:48.115 17039.360 - 17140.185: 90.4861% ( 25) 00:07:48.115 17140.185 - 17241.009: 90.6713% ( 16) 00:07:48.115 17241.009 - 17341.834: 90.7986% ( 11) 00:07:48.115 17341.834 - 17442.658: 90.9491% ( 13) 00:07:48.115 17442.658 - 17543.483: 91.1921% ( 21) 00:07:48.115 17543.483 - 17644.308: 91.6435% ( 39) 00:07:48.115 17644.308 - 17745.132: 91.8981% ( 22) 00:07:48.115 17745.132 - 17845.957: 92.2685% ( 32) 00:07:48.115 17845.957 - 17946.782: 92.6852% ( 36) 00:07:48.115 17946.782 - 18047.606: 93.2870% ( 52) 00:07:48.115 18047.606 - 18148.431: 93.6806% ( 34) 00:07:48.115 18148.431 - 18249.255: 94.2014% ( 45) 00:07:48.115 18249.255 - 18350.080: 94.4792% ( 24) 00:07:48.115 18350.080 - 18450.905: 94.7106% ( 20) 00:07:48.115 18450.905 - 18551.729: 94.8727% ( 14) 00:07:48.115 18551.729 - 18652.554: 94.9884% ( 10) 00:07:48.115 18652.554 - 18753.378: 95.0463% ( 5) 00:07:48.115 18753.378 - 18854.203: 95.1042% ( 5) 00:07:48.115 18854.203 - 18955.028: 95.2315% ( 11) 00:07:48.115 18955.028 - 19055.852: 95.3819% ( 13) 00:07:48.115 19055.852 - 19156.677: 95.4861% ( 9) 00:07:48.115 19156.677 - 19257.502: 95.5440% ( 5) 00:07:48.115 19257.502 - 19358.326: 95.5556% ( 1) 00:07:48.115 19358.326 - 19459.151: 95.5671% ( 1) 00:07:48.115 19459.151 - 19559.975: 95.6481% ( 7) 00:07:48.115 19559.975 - 19660.800: 95.7407% ( 8) 00:07:48.115 19660.800 - 19761.625: 95.7986% ( 5) 00:07:48.115 19761.625 - 19862.449: 95.8681% ( 6) 00:07:48.115 19862.449 - 19963.274: 95.9491% ( 7) 00:07:48.115 19963.274 - 20064.098: 96.0648% ( 10) 00:07:48.115 20064.098 - 20164.923: 96.2037% ( 12) 00:07:48.115 20164.923 - 20265.748: 96.3426% ( 12) 00:07:48.115 20265.748 - 20366.572: 96.4583% ( 10) 00:07:48.115 20366.572 - 20467.397: 96.5972% ( 12) 00:07:48.115 20467.397 - 20568.222: 96.7824% ( 16) 00:07:48.115 20568.222 - 20669.046: 96.9444% ( 14) 00:07:48.115 20669.046 - 20769.871: 97.1875% ( 21) 00:07:48.115 20769.871 - 20870.695: 97.4537% ( 23) 00:07:48.115 20870.695 - 20971.520: 97.6273% ( 15) 00:07:48.115 20971.520 - 21072.345: 97.7546% ( 11) 00:07:48.115 21072.345 - 21173.169: 97.7662% ( 1) 00:07:48.115 21173.169 - 21273.994: 97.7778% ( 1) 00:07:48.115 22685.538 - 22786.363: 97.7894% ( 1) 00:07:48.115 22786.363 - 22887.188: 97.8588% ( 6) 00:07:48.115 22887.188 - 22988.012: 97.9398% ( 7) 00:07:48.115 22988.012 - 23088.837: 97.9977% ( 5) 
00:07:48.115 23088.837 - 23189.662: 98.0556% ( 5) 00:07:48.115 23189.662 - 23290.486: 98.1250% ( 6) 00:07:48.115 23290.486 - 23391.311: 98.1829% ( 5) 00:07:48.115 23391.311 - 23492.135: 98.2523% ( 6) 00:07:48.115 23492.135 - 23592.960: 98.3102% ( 5) 00:07:48.115 23592.960 - 23693.785: 98.3796% ( 6) 00:07:48.115 23693.785 - 23794.609: 98.4375% ( 5) 00:07:48.115 23794.609 - 23895.434: 98.5069% ( 6) 00:07:48.115 23895.434 - 23996.258: 98.5185% ( 1) 00:07:48.115 28432.542 - 28634.191: 98.5301% ( 1) 00:07:48.115 28634.191 - 28835.840: 98.5995% ( 6) 00:07:48.115 28835.840 - 29037.489: 98.6806% ( 7) 00:07:48.115 29037.489 - 29239.138: 98.7616% ( 7) 00:07:48.115 29239.138 - 29440.788: 98.8426% ( 7) 00:07:48.115 29440.788 - 29642.437: 98.9236% ( 7) 00:07:48.115 29642.437 - 29844.086: 99.0046% ( 7) 00:07:48.115 29844.086 - 30045.735: 99.0856% ( 7) 00:07:48.115 30045.735 - 30247.385: 99.1667% ( 7) 00:07:48.115 30247.385 - 30449.034: 99.2477% ( 7) 00:07:48.115 30449.034 - 30650.683: 99.2593% ( 1) 00:07:48.115 38111.705 - 38313.354: 99.3056% ( 4) 00:07:48.115 38313.354 - 38515.003: 99.3634% ( 5) 00:07:48.115 38515.003 - 38716.652: 99.4213% ( 5) 00:07:48.115 38716.652 - 38918.302: 99.4792% ( 5) 00:07:48.115 38918.302 - 39119.951: 99.5486% ( 6) 00:07:48.115 39119.951 - 39321.600: 99.6181% ( 6) 00:07:48.115 39321.600 - 39523.249: 99.6991% ( 7) 00:07:48.115 39523.249 - 39724.898: 99.7685% ( 6) 00:07:48.115 39724.898 - 39926.548: 99.8380% ( 6) 00:07:48.116 39926.548 - 40128.197: 99.9074% ( 6) 00:07:48.116 40128.197 - 40329.846: 99.9769% ( 6) 00:07:48.116 40329.846 - 40531.495: 100.0000% ( 2) 00:07:48.116 00:07:48.116 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:48.116 ============================================================================== 00:07:48.116 Range in us Cumulative IO count 00:07:48.116 9931.225 - 9981.637: 0.0231% ( 2) 00:07:48.116 9981.637 - 10032.049: 0.0579% ( 3) 00:07:48.116 10032.049 - 10082.462: 0.1620% ( 9) 00:07:48.116 10082.462 - 10132.874: 0.2315% ( 6) 00:07:48.116 10132.874 - 10183.286: 0.4514% ( 19) 00:07:48.116 10183.286 - 10233.698: 0.5208% ( 6) 00:07:48.116 10233.698 - 10284.111: 0.5787% ( 5) 00:07:48.116 10284.111 - 10334.523: 0.6019% ( 2) 00:07:48.116 10334.523 - 10384.935: 0.6366% ( 3) 00:07:48.116 10384.935 - 10435.348: 0.6713% ( 3) 00:07:48.116 10435.348 - 10485.760: 0.6944% ( 2) 00:07:48.116 10485.760 - 10536.172: 0.7176% ( 2) 00:07:48.116 10536.172 - 10586.585: 0.7407% ( 2) 00:07:48.116 10586.585 - 10636.997: 0.7986% ( 5) 00:07:48.116 10636.997 - 10687.409: 0.8333% ( 3) 00:07:48.116 10687.409 - 10737.822: 0.8796% ( 4) 00:07:48.116 10737.822 - 10788.234: 1.0069% ( 11) 00:07:48.116 10788.234 - 10838.646: 1.1574% ( 13) 00:07:48.116 10838.646 - 10889.058: 1.4352% ( 24) 00:07:48.116 10889.058 - 10939.471: 1.6204% ( 16) 00:07:48.116 10939.471 - 10989.883: 1.8287% ( 18) 00:07:48.116 10989.883 - 11040.295: 2.1644% ( 29) 00:07:48.116 11040.295 - 11090.708: 2.4653% ( 26) 00:07:48.116 11090.708 - 11141.120: 2.7778% ( 27) 00:07:48.116 11141.120 - 11191.532: 3.1366% ( 31) 00:07:48.116 11191.532 - 11241.945: 3.5069% ( 32) 00:07:48.116 11241.945 - 11292.357: 3.9931% ( 42) 00:07:48.116 11292.357 - 11342.769: 4.5370% ( 47) 00:07:48.116 11342.769 - 11393.182: 4.9769% ( 38) 00:07:48.116 11393.182 - 11443.594: 5.3472% ( 32) 00:07:48.116 11443.594 - 11494.006: 5.7639% ( 36) 00:07:48.116 11494.006 - 11544.418: 6.1227% ( 31) 00:07:48.116 11544.418 - 11594.831: 6.4120% ( 25) 00:07:48.116 11594.831 - 11645.243: 6.6551% ( 21) 00:07:48.116 11645.243 - 11695.655: 6.9097% ( 
22) 00:07:48.116 11695.655 - 11746.068: 7.0370% ( 11) 00:07:48.116 11746.068 - 11796.480: 7.1875% ( 13) 00:07:48.116 11796.480 - 11846.892: 7.3727% ( 16) 00:07:48.116 11846.892 - 11897.305: 7.5347% ( 14) 00:07:48.116 11897.305 - 11947.717: 7.8356% ( 26) 00:07:48.116 11947.717 - 11998.129: 8.2060% ( 32) 00:07:48.116 11998.129 - 12048.542: 8.7500% ( 47) 00:07:48.116 12048.542 - 12098.954: 9.3056% ( 48) 00:07:48.116 12098.954 - 12149.366: 9.8032% ( 43) 00:07:48.116 12149.366 - 12199.778: 10.0810% ( 24) 00:07:48.116 12199.778 - 12250.191: 10.4514% ( 32) 00:07:48.116 12250.191 - 12300.603: 10.7407% ( 25) 00:07:48.116 12300.603 - 12351.015: 11.0301% ( 25) 00:07:48.116 12351.015 - 12401.428: 11.2269% ( 17) 00:07:48.116 12401.428 - 12451.840: 11.4005% ( 15) 00:07:48.116 12451.840 - 12502.252: 11.5972% ( 17) 00:07:48.116 12502.252 - 12552.665: 11.8287% ( 20) 00:07:48.116 12552.665 - 12603.077: 12.0833% ( 22) 00:07:48.116 12603.077 - 12653.489: 12.4306% ( 30) 00:07:48.116 12653.489 - 12703.902: 12.8241% ( 34) 00:07:48.116 12703.902 - 12754.314: 13.1481% ( 28) 00:07:48.116 12754.314 - 12804.726: 13.5880% ( 38) 00:07:48.116 12804.726 - 12855.138: 13.9236% ( 29) 00:07:48.116 12855.138 - 12905.551: 14.4097% ( 42) 00:07:48.116 12905.551 - 13006.375: 15.3356% ( 80) 00:07:48.116 13006.375 - 13107.200: 16.3194% ( 85) 00:07:48.116 13107.200 - 13208.025: 17.7083% ( 120) 00:07:48.116 13208.025 - 13308.849: 19.7222% ( 174) 00:07:48.116 13308.849 - 13409.674: 21.6319% ( 165) 00:07:48.116 13409.674 - 13510.498: 23.7500% ( 183) 00:07:48.116 13510.498 - 13611.323: 26.3079% ( 221) 00:07:48.116 13611.323 - 13712.148: 28.8773% ( 222) 00:07:48.116 13712.148 - 13812.972: 31.2963% ( 209) 00:07:48.116 13812.972 - 13913.797: 34.5602% ( 282) 00:07:48.116 13913.797 - 14014.622: 38.1944% ( 314) 00:07:48.116 14014.622 - 14115.446: 42.3843% ( 362) 00:07:48.116 14115.446 - 14216.271: 47.1296% ( 410) 00:07:48.116 14216.271 - 14317.095: 51.5278% ( 380) 00:07:48.116 14317.095 - 14417.920: 55.3472% ( 330) 00:07:48.116 14417.920 - 14518.745: 59.6412% ( 371) 00:07:48.116 14518.745 - 14619.569: 62.7778% ( 271) 00:07:48.116 14619.569 - 14720.394: 65.6366% ( 247) 00:07:48.116 14720.394 - 14821.218: 68.7269% ( 267) 00:07:48.116 14821.218 - 14922.043: 71.2037% ( 214) 00:07:48.116 14922.043 - 15022.868: 73.0093% ( 156) 00:07:48.116 15022.868 - 15123.692: 74.8264% ( 157) 00:07:48.116 15123.692 - 15224.517: 76.4352% ( 139) 00:07:48.116 15224.517 - 15325.342: 78.0324% ( 138) 00:07:48.116 15325.342 - 15426.166: 79.3403% ( 113) 00:07:48.116 15426.166 - 15526.991: 80.1389% ( 69) 00:07:48.116 15526.991 - 15627.815: 81.0185% ( 76) 00:07:48.116 15627.815 - 15728.640: 81.7361% ( 62) 00:07:48.116 15728.640 - 15829.465: 82.7431% ( 87) 00:07:48.116 15829.465 - 15930.289: 83.7153% ( 84) 00:07:48.116 15930.289 - 16031.114: 84.4444% ( 63) 00:07:48.116 16031.114 - 16131.938: 84.9190% ( 41) 00:07:48.116 16131.938 - 16232.763: 85.4514% ( 46) 00:07:48.116 16232.763 - 16333.588: 85.7870% ( 29) 00:07:48.116 16333.588 - 16434.412: 86.1458% ( 31) 00:07:48.116 16434.412 - 16535.237: 86.6204% ( 41) 00:07:48.116 16535.237 - 16636.062: 87.2106% ( 51) 00:07:48.116 16636.062 - 16736.886: 87.7431% ( 46) 00:07:48.116 16736.886 - 16837.711: 88.3218% ( 50) 00:07:48.116 16837.711 - 16938.535: 88.7616% ( 38) 00:07:48.116 16938.535 - 17039.360: 89.3866% ( 54) 00:07:48.116 17039.360 - 17140.185: 89.9306% ( 47) 00:07:48.116 17140.185 - 17241.009: 90.5556% ( 54) 00:07:48.116 17241.009 - 17341.834: 91.2384% ( 59) 00:07:48.116 17341.834 - 17442.658: 91.6898% ( 39) 00:07:48.116 
17442.658 - 17543.483: 92.2338% ( 47) 00:07:48.116 17543.483 - 17644.308: 92.7083% ( 41) 00:07:48.116 17644.308 - 17745.132: 93.1366% ( 37) 00:07:48.116 17745.132 - 17845.957: 93.4375% ( 26) 00:07:48.116 17845.957 - 17946.782: 93.6574% ( 19) 00:07:48.116 17946.782 - 18047.606: 93.7616% ( 9) 00:07:48.116 18047.606 - 18148.431: 93.8310% ( 6) 00:07:48.116 18148.431 - 18249.255: 93.8889% ( 5) 00:07:48.116 18249.255 - 18350.080: 93.9699% ( 7) 00:07:48.116 18350.080 - 18450.905: 94.1204% ( 13) 00:07:48.116 18450.905 - 18551.729: 94.2708% ( 13) 00:07:48.116 18551.729 - 18652.554: 94.4213% ( 13) 00:07:48.116 18652.554 - 18753.378: 94.5486% ( 11) 00:07:48.116 18753.378 - 18854.203: 94.9653% ( 36) 00:07:48.116 18854.203 - 18955.028: 95.1620% ( 17) 00:07:48.116 18955.028 - 19055.852: 95.4167% ( 22) 00:07:48.116 19055.852 - 19156.677: 95.7060% ( 25) 00:07:48.116 19156.677 - 19257.502: 95.9838% ( 24) 00:07:48.116 19257.502 - 19358.326: 96.1458% ( 14) 00:07:48.116 19358.326 - 19459.151: 96.2269% ( 7) 00:07:48.116 19459.151 - 19559.975: 96.2847% ( 5) 00:07:48.116 19559.975 - 19660.800: 96.3657% ( 7) 00:07:48.116 19660.800 - 19761.625: 96.4699% ( 9) 00:07:48.116 19761.625 - 19862.449: 96.6898% ( 19) 00:07:48.116 19862.449 - 19963.274: 96.8519% ( 14) 00:07:48.116 19963.274 - 20064.098: 96.9097% ( 5) 00:07:48.116 20064.098 - 20164.923: 96.9676% ( 5) 00:07:48.116 20164.923 - 20265.748: 97.0255% ( 5) 00:07:48.116 20265.748 - 20366.572: 97.0370% ( 1) 00:07:48.116 20669.046 - 20769.871: 97.0602% ( 2) 00:07:48.116 20769.871 - 20870.695: 97.1644% ( 9) 00:07:48.116 20870.695 - 20971.520: 97.2222% ( 5) 00:07:48.116 20971.520 - 21072.345: 97.3148% ( 8) 00:07:48.116 21072.345 - 21173.169: 97.5000% ( 16) 00:07:48.116 21173.169 - 21273.994: 97.5810% ( 7) 00:07:48.116 21273.994 - 21374.818: 97.6505% ( 6) 00:07:48.116 21374.818 - 21475.643: 97.7083% ( 5) 00:07:48.116 21475.643 - 21576.468: 97.7662% ( 5) 00:07:48.116 21576.468 - 21677.292: 97.7778% ( 1) 00:07:48.116 22383.065 - 22483.889: 97.7894% ( 1) 00:07:48.116 22483.889 - 22584.714: 97.8588% ( 6) 00:07:48.116 22584.714 - 22685.538: 97.9282% ( 6) 00:07:48.116 22685.538 - 22786.363: 98.0440% ( 10) 00:07:48.116 22786.363 - 22887.188: 98.1597% ( 10) 00:07:48.116 22887.188 - 22988.012: 98.2407% ( 7) 00:07:48.116 22988.012 - 23088.837: 98.2986% ( 5) 00:07:48.116 23088.837 - 23189.662: 98.3449% ( 4) 00:07:48.116 23189.662 - 23290.486: 98.4028% ( 5) 00:07:48.116 23290.486 - 23391.311: 98.4491% ( 4) 00:07:48.116 23391.311 - 23492.135: 98.5069% ( 5) 00:07:48.116 23492.135 - 23592.960: 98.5185% ( 1) 00:07:48.116 27424.295 - 27625.945: 98.5880% ( 6) 00:07:48.116 27625.945 - 27827.594: 98.6690% ( 7) 00:07:48.116 27827.594 - 28029.243: 98.7500% ( 7) 00:07:48.116 28029.243 - 28230.892: 98.8426% ( 8) 00:07:48.116 28230.892 - 28432.542: 98.9236% ( 7) 00:07:48.116 28432.542 - 28634.191: 99.0046% ( 7) 00:07:48.116 28634.191 - 28835.840: 99.0856% ( 7) 00:07:48.116 28835.840 - 29037.489: 99.1667% ( 7) 00:07:48.116 29037.489 - 29239.138: 99.2477% ( 7) 00:07:48.116 29239.138 - 29440.788: 99.2593% ( 1) 00:07:48.117 36498.511 - 36700.160: 99.2824% ( 2) 00:07:48.117 36700.160 - 36901.809: 99.3519% ( 6) 00:07:48.117 36901.809 - 37103.458: 99.4213% ( 6) 00:07:48.117 37103.458 - 37305.108: 99.4907% ( 6) 00:07:48.117 37305.108 - 37506.757: 99.5718% ( 7) 00:07:48.117 37506.757 - 37708.406: 99.6412% ( 6) 00:07:48.117 37708.406 - 37910.055: 99.7106% ( 6) 00:07:48.117 37910.055 - 38111.705: 99.7917% ( 7) 00:07:48.117 38111.705 - 38313.354: 99.8611% ( 6) 00:07:48.117 38313.354 - 38515.003: 99.9306% ( 
6) 00:07:48.117 38515.003 - 38716.652: 100.0000% ( 6) 00:07:48.117 00:07:48.117 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:48.117 ============================================================================== 00:07:48.117 Range in us Cumulative IO count 00:07:48.117 10384.935 - 10435.348: 0.0115% ( 1) 00:07:48.117 10485.760 - 10536.172: 0.0230% ( 1) 00:07:48.117 10536.172 - 10586.585: 0.0574% ( 3) 00:07:48.117 10586.585 - 10636.997: 0.1264% ( 6) 00:07:48.117 10636.997 - 10687.409: 0.2183% ( 8) 00:07:48.117 10687.409 - 10737.822: 0.3676% ( 13) 00:07:48.117 10737.822 - 10788.234: 0.5055% ( 12) 00:07:48.117 10788.234 - 10838.646: 0.6434% ( 12) 00:07:48.117 10838.646 - 10889.058: 0.9191% ( 24) 00:07:48.117 10889.058 - 10939.471: 1.1374% ( 19) 00:07:48.117 10939.471 - 10989.883: 1.4131% ( 24) 00:07:48.117 10989.883 - 11040.295: 1.5855% ( 15) 00:07:48.117 11040.295 - 11090.708: 1.7923% ( 18) 00:07:48.117 11090.708 - 11141.120: 2.1599% ( 32) 00:07:48.117 11141.120 - 11191.532: 2.5391% ( 33) 00:07:48.117 11191.532 - 11241.945: 3.0446% ( 44) 00:07:48.117 11241.945 - 11292.357: 3.4237% ( 33) 00:07:48.117 11292.357 - 11342.769: 3.7339% ( 27) 00:07:48.117 11342.769 - 11393.182: 4.1475% ( 36) 00:07:48.117 11393.182 - 11443.594: 4.8139% ( 58) 00:07:48.117 11443.594 - 11494.006: 5.3309% ( 45) 00:07:48.117 11494.006 - 11544.418: 5.8134% ( 42) 00:07:48.117 11544.418 - 11594.831: 6.2960% ( 42) 00:07:48.117 11594.831 - 11645.243: 6.9393% ( 56) 00:07:48.117 11645.243 - 11695.655: 7.3300% ( 34) 00:07:48.117 11695.655 - 11746.068: 7.9389% ( 53) 00:07:48.117 11746.068 - 11796.480: 8.2721% ( 29) 00:07:48.117 11796.480 - 11846.892: 8.5248% ( 22) 00:07:48.117 11846.892 - 11897.305: 8.7661% ( 21) 00:07:48.117 11897.305 - 11947.717: 8.9614% ( 17) 00:07:48.117 11947.717 - 11998.129: 9.1567% ( 17) 00:07:48.117 11998.129 - 12048.542: 9.3405% ( 16) 00:07:48.117 12048.542 - 12098.954: 9.5244% ( 16) 00:07:48.117 12098.954 - 12149.366: 9.6622% ( 12) 00:07:48.117 12149.366 - 12199.778: 9.8001% ( 12) 00:07:48.117 12199.778 - 12250.191: 10.0299% ( 20) 00:07:48.117 12250.191 - 12300.603: 10.2482% ( 19) 00:07:48.117 12300.603 - 12351.015: 10.5124% ( 23) 00:07:48.117 12351.015 - 12401.428: 10.9030% ( 34) 00:07:48.117 12401.428 - 12451.840: 11.1213% ( 19) 00:07:48.117 12451.840 - 12502.252: 11.3741% ( 22) 00:07:48.117 12502.252 - 12552.665: 11.6268% ( 22) 00:07:48.117 12552.665 - 12603.077: 11.8107% ( 16) 00:07:48.117 12603.077 - 12653.489: 12.0060% ( 17) 00:07:48.117 12653.489 - 12703.902: 12.2013% ( 17) 00:07:48.117 12703.902 - 12754.314: 12.4311% ( 20) 00:07:48.117 12754.314 - 12804.726: 12.8102% ( 33) 00:07:48.117 12804.726 - 12855.138: 13.2468% ( 38) 00:07:48.117 12855.138 - 12905.551: 13.7523% ( 44) 00:07:48.117 12905.551 - 13006.375: 14.9472% ( 104) 00:07:48.117 13006.375 - 13107.200: 16.5097% ( 136) 00:07:48.117 13107.200 - 13208.025: 17.9917% ( 129) 00:07:48.117 13208.025 - 13308.849: 20.1976% ( 192) 00:07:48.117 13308.849 - 13409.674: 22.2197% ( 176) 00:07:48.117 13409.674 - 13510.498: 24.2992% ( 181) 00:07:48.117 13510.498 - 13611.323: 26.6774% ( 207) 00:07:48.117 13611.323 - 13712.148: 29.4118% ( 238) 00:07:48.117 13712.148 - 13812.972: 32.2036% ( 243) 00:07:48.117 13812.972 - 13913.797: 35.6503% ( 300) 00:07:48.117 13913.797 - 14014.622: 39.6714% ( 350) 00:07:48.117 14014.622 - 14115.446: 43.7155% ( 352) 00:07:48.117 14115.446 - 14216.271: 47.2426% ( 307) 00:07:48.117 14216.271 - 14317.095: 50.6434% ( 296) 00:07:48.117 14317.095 - 14417.920: 53.9752% ( 290) 00:07:48.117 14417.920 - 14518.745: 
57.2151% ( 282) 00:07:48.117 14518.745 - 14619.569: 60.0184% ( 244) 00:07:48.117 14619.569 - 14720.394: 63.3272% ( 288) 00:07:48.117 14720.394 - 14821.218: 66.4407% ( 271) 00:07:48.117 14821.218 - 14922.043: 69.5083% ( 267) 00:07:48.117 14922.043 - 15022.868: 72.5988% ( 269) 00:07:48.117 15022.868 - 15123.692: 75.8157% ( 280) 00:07:48.117 15123.692 - 15224.517: 78.1480% ( 203) 00:07:48.117 15224.517 - 15325.342: 79.6760% ( 133) 00:07:48.117 15325.342 - 15426.166: 81.0662% ( 121) 00:07:48.117 15426.166 - 15526.991: 81.9393% ( 76) 00:07:48.117 15526.991 - 15627.815: 82.7321% ( 69) 00:07:48.117 15627.815 - 15728.640: 83.2950% ( 49) 00:07:48.117 15728.640 - 15829.465: 83.9269% ( 55) 00:07:48.117 15829.465 - 15930.289: 84.4095% ( 42) 00:07:48.117 15930.289 - 16031.114: 84.7426% ( 29) 00:07:48.117 16031.114 - 16131.938: 85.2022% ( 40) 00:07:48.117 16131.938 - 16232.763: 85.8686% ( 58) 00:07:48.117 16232.763 - 16333.588: 86.4890% ( 54) 00:07:48.117 16333.588 - 16434.412: 86.7647% ( 24) 00:07:48.117 16434.412 - 16535.237: 87.0634% ( 26) 00:07:48.117 16535.237 - 16636.062: 87.3392% ( 24) 00:07:48.117 16636.062 - 16736.886: 87.6379% ( 26) 00:07:48.117 16736.886 - 16837.711: 88.0515% ( 36) 00:07:48.117 16837.711 - 16938.535: 88.5915% ( 47) 00:07:48.117 16938.535 - 17039.360: 89.1314% ( 47) 00:07:48.117 17039.360 - 17140.185: 89.6599% ( 46) 00:07:48.117 17140.185 - 17241.009: 90.1654% ( 44) 00:07:48.117 17241.009 - 17341.834: 90.7054% ( 47) 00:07:48.117 17341.834 - 17442.658: 91.3258% ( 54) 00:07:48.117 17442.658 - 17543.483: 91.8888% ( 49) 00:07:48.117 17543.483 - 17644.308: 92.3828% ( 43) 00:07:48.117 17644.308 - 17745.132: 92.8768% ( 43) 00:07:48.117 17745.132 - 17845.957: 93.3249% ( 39) 00:07:48.117 17845.957 - 17946.782: 93.6696% ( 30) 00:07:48.117 17946.782 - 18047.606: 93.8534% ( 16) 00:07:48.117 18047.606 - 18148.431: 93.9798% ( 11) 00:07:48.117 18148.431 - 18249.255: 94.0832% ( 9) 00:07:48.117 18249.255 - 18350.080: 94.1981% ( 10) 00:07:48.117 18350.080 - 18450.905: 94.3244% ( 11) 00:07:48.117 18450.905 - 18551.729: 94.4853% ( 14) 00:07:48.117 18551.729 - 18652.554: 94.6691% ( 16) 00:07:48.117 18652.554 - 18753.378: 94.8300% ( 14) 00:07:48.117 18753.378 - 18854.203: 95.0023% ( 15) 00:07:48.117 18854.203 - 18955.028: 95.2206% ( 19) 00:07:48.117 18955.028 - 19055.852: 95.4389% ( 19) 00:07:48.117 19055.852 - 19156.677: 95.6572% ( 19) 00:07:48.117 19156.677 - 19257.502: 95.8640% ( 18) 00:07:48.117 19257.502 - 19358.326: 96.1397% ( 24) 00:07:48.117 19358.326 - 19459.151: 96.4844% ( 30) 00:07:48.117 19459.151 - 19559.975: 96.7256% ( 21) 00:07:48.117 19559.975 - 19660.800: 96.9554% ( 20) 00:07:48.117 19660.800 - 19761.625: 97.3116% ( 31) 00:07:48.117 19761.625 - 19862.449: 97.5069% ( 17) 00:07:48.117 19862.449 - 19963.274: 97.6562% ( 13) 00:07:48.117 19963.274 - 20064.098: 97.7597% ( 9) 00:07:48.117 20064.098 - 20164.923: 97.7941% ( 3) 00:07:48.117 21475.643 - 21576.468: 97.8056% ( 1) 00:07:48.117 21576.468 - 21677.292: 97.8516% ( 4) 00:07:48.117 21677.292 - 21778.117: 97.9205% ( 6) 00:07:48.117 21778.117 - 21878.942: 98.0239% ( 9) 00:07:48.117 21878.942 - 21979.766: 98.4030% ( 33) 00:07:48.117 21979.766 - 22080.591: 98.5869% ( 16) 00:07:48.117 22080.591 - 22181.415: 98.7017% ( 10) 00:07:48.117 22181.415 - 22282.240: 98.8051% ( 9) 00:07:48.117 22282.240 - 22383.065: 98.8856% ( 7) 00:07:48.117 22383.065 - 22483.889: 98.9660% ( 7) 00:07:48.117 22483.889 - 22584.714: 99.0234% ( 5) 00:07:48.117 22584.714 - 22685.538: 99.0809% ( 5) 00:07:48.117 22685.538 - 22786.363: 99.1383% ( 5) 00:07:48.117 22786.363 
- 22887.188: 99.1843% ( 4) 00:07:48.117 22887.188 - 22988.012: 99.2417% ( 5) 00:07:48.117 22988.012 - 23088.837: 99.2647% ( 2) 00:07:48.117 27020.997 - 27222.646: 99.2762% ( 1) 00:07:48.117 27222.646 - 27424.295: 99.3451% ( 6) 00:07:48.117 27424.295 - 27625.945: 99.4370% ( 8) 00:07:48.117 27625.945 - 27827.594: 99.5060% ( 6) 00:07:48.117 27827.594 - 28029.243: 99.5864% ( 7) 00:07:48.117 28029.243 - 28230.892: 99.6668% ( 7) 00:07:48.117 28230.892 - 28432.542: 99.7472% ( 7) 00:07:48.117 28432.542 - 28634.191: 99.8162% ( 6) 00:07:48.117 28634.191 - 28835.840: 99.9081% ( 8) 00:07:48.117 28835.840 - 29037.489: 99.9885% ( 7) 00:07:48.117 29037.489 - 29239.138: 100.0000% ( 1) 00:07:48.117 00:07:48.117 18:17:06 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:07:48.117 00:07:48.117 real 0m2.549s 00:07:48.117 user 0m2.229s 00:07:48.117 sys 0m0.206s 00:07:48.117 18:17:06 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.117 ************************************ 00:07:48.117 END TEST nvme_perf 00:07:48.117 ************************************ 00:07:48.117 18:17:06 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:07:48.118 18:17:06 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:48.118 18:17:06 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:48.118 18:17:06 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.118 18:17:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:48.118 ************************************ 00:07:48.118 START TEST nvme_hello_world 00:07:48.118 ************************************ 00:07:48.118 18:17:06 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:48.379 Initializing NVMe Controllers 00:07:48.379 Attached to 0000:00:13.0 00:07:48.379 Namespace ID: 1 size: 1GB 00:07:48.379 Attached to 0000:00:10.0 00:07:48.379 Namespace ID: 1 size: 6GB 00:07:48.379 Attached to 0000:00:11.0 00:07:48.379 Namespace ID: 1 size: 5GB 00:07:48.379 Attached to 0000:00:12.0 00:07:48.379 Namespace ID: 1 size: 4GB 00:07:48.379 Namespace ID: 2 size: 4GB 00:07:48.379 Namespace ID: 3 size: 4GB 00:07:48.379 Initialization complete. 00:07:48.379 INFO: using host memory buffer for IO 00:07:48.379 Hello world! 00:07:48.379 INFO: using host memory buffer for IO 00:07:48.379 Hello world! 00:07:48.379 INFO: using host memory buffer for IO 00:07:48.379 Hello world! 00:07:48.379 INFO: using host memory buffer for IO 00:07:48.379 Hello world! 00:07:48.379 INFO: using host memory buffer for IO 00:07:48.379 Hello world! 00:07:48.379 INFO: using host memory buffer for IO 00:07:48.379 Hello world! 
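The latency histograms printed by nvme_perf above are cumulative: each "low - high: pct ( n )" row gives the running share of I/Os that completed at or below the bucket's upper bound, so a percentile can be read off as the first bucket whose percentage crosses the target. A minimal sketch of that scan, assuming the rows of one histogram section have been copied into a text file (the file name and regex are illustrative, not part of the SPDK tooling):

import re

# One histogram row, e.g. "14518.745 - 14619.569: 60.7292% ( 299)"
ROW = re.compile(r'([\d.]+)\s*-\s*([\d.]+):\s*([\d.]+)%\s*\(\s*(\d+)\)')

def percentile_us(lines, target_pct):
    # Rows are emitted in ascending order, so the first bucket whose
    # cumulative percentage reaches the target bounds the percentile.
    for line in lines:
        for _low, high, pct, _count in ROW.findall(line):
            if float(pct) >= target_pct:
                return float(high)   # upper bound of the bucket, in us
    return None                      # target never reached in the capture

with open('histogram.txt') as f:     # hypothetical capture of one section
    print('p99 <=', percentile_us(f, 99.0), 'us')

Only bucket boundaries are known, so this gives an upper bound rather than an exact value; against the 0000:00:11.0 section above, for example, the cumulative column first crosses 99% in the low-30-millisecond buckets, which matches the long tail visible at the bottom of each table.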
00:07:48.379 00:07:48.379 real 0m0.250s 00:07:48.379 user 0m0.091s 00:07:48.379 sys 0m0.113s 00:07:48.379 ************************************ 00:07:48.379 END TEST nvme_hello_world 00:07:48.379 ************************************ 00:07:48.379 18:17:06 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.379 18:17:06 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:48.379 18:17:06 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:48.379 18:17:06 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:48.379 18:17:06 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.379 18:17:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:48.379 ************************************ 00:07:48.379 START TEST nvme_sgl 00:07:48.379 ************************************ 00:07:48.380 18:17:06 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:48.641 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:07:48.641 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:07:48.641 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:07:48.641 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:07:48.641 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:07:48.641 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:07:48.641 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:07:48.641 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:07:48.641 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:07:48.641 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:07:48.641 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:07:48.641 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:07:48.641 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:07:48.641 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:07:48.641 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:07:48.641 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:07:48.641 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:07:48.641 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:07:48.641 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:07:48.641 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:07:48.641 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:07:48.641 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:07:48.641 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:07:48.641 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:07:48.641 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:07:48.641 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:07:48.641 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:07:48.641 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:07:48.641 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:07:48.641 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:07:48.641 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:07:48.642 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:07:48.642 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:07:48.642 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:07:48.642 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:07:48.642 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:07:48.642 NVMe Readv/Writev Request test 00:07:48.642 Attached to 0000:00:13.0 00:07:48.642 Attached to 0000:00:10.0 00:07:48.642 Attached to 0000:00:11.0 00:07:48.642 Attached to 0000:00:12.0 00:07:48.642 0000:00:10.0: build_io_request_2 test passed 00:07:48.642 0000:00:10.0: build_io_request_4 test passed 00:07:48.642 0000:00:10.0: build_io_request_5 test passed 00:07:48.642 0000:00:10.0: build_io_request_6 test passed 00:07:48.642 0000:00:10.0: build_io_request_7 test passed 00:07:48.642 0000:00:10.0: build_io_request_10 test passed 00:07:48.642 0000:00:11.0: build_io_request_2 test passed 00:07:48.642 0000:00:11.0: build_io_request_4 test passed 00:07:48.642 0000:00:11.0: build_io_request_5 test passed 00:07:48.642 0000:00:11.0: build_io_request_6 test passed 00:07:48.642 0000:00:11.0: build_io_request_7 test passed 00:07:48.642 0000:00:11.0: build_io_request_10 test passed 00:07:48.642 Cleaning up... 00:07:48.642 00:07:48.642 real 0m0.358s 00:07:48.642 user 0m0.179s 00:07:48.642 sys 0m0.131s 00:07:48.642 18:17:07 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.642 ************************************ 00:07:48.642 END TEST nvme_sgl 00:07:48.642 ************************************ 00:07:48.642 18:17:07 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:07:49.091 18:17:07 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:49.091 18:17:07 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:49.091 18:17:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.091 18:17:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:49.091 ************************************ 00:07:49.091 START TEST nvme_e2edp 00:07:49.091 ************************************ 00:07:49.091 18:17:07 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:49.091 NVMe Write/Read with End-to-End data protection test 00:07:49.091 Attached to 0000:00:13.0 00:07:49.091 Attached to 0000:00:10.0 00:07:49.091 Attached to 0000:00:11.0 00:07:49.091 Attached to 0000:00:12.0 00:07:49.091 Cleaning up... 
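In the nvme_sgl block above, the "Invalid IO length parameter" lines are the negative half of the test: those build_io_request_N cases are constructed so the driver must reject them up front, while the "test passed" lines cover the requests each controller is expected to complete. A small scan over a captured log makes the per-controller split explicit (a sketch over this log format, not an SPDK utility; the capture file name is hypothetical):

import re
from collections import defaultdict

REJECTED = re.compile(r'(\S+): build_io_request_(\d+) Invalid IO length parameter')
PASSED = re.compile(r'(\S+): build_io_request_(\d+) test passed')

def sgl_outcomes(lines):
    # Per controller: which request indices were rejected up front and
    # which ran to completion as passing tests.
    out = defaultdict(lambda: {'rejected': set(), 'passed': set()})
    for line in lines:
        for ctrlr, idx in REJECTED.findall(line):
            out[ctrlr]['rejected'].add(int(idx))
        for ctrlr, idx in PASSED.findall(line):
            out[ctrlr]['passed'].add(int(idx))
    return out

for ctrlr, r in sorted(sgl_outcomes(open('sgl.log')).items()):
    print(ctrlr, 'rejected', sorted(r['rejected']), 'passed', sorted(r['passed']))

Run against the output above, this would show 0000:00:10.0 and 0000:00:11.0 passing requests 2, 4, 5, 6, 7 and 10 while 0000:00:13.0 and 0000:00:12.0 reject all twelve, a quick way to see which controllers accept which SGL shapes.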
00:07:49.351 00:07:49.351 real 0m0.232s 00:07:49.351 user 0m0.074s 00:07:49.351 sys 0m0.107s 00:07:49.351 ************************************ 00:07:49.351 END TEST nvme_e2edp 00:07:49.351 ************************************ 00:07:49.351 18:17:07 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.351 18:17:07 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:07:49.351 18:17:07 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:49.351 18:17:07 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:49.351 18:17:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.351 18:17:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:49.351 ************************************ 00:07:49.351 START TEST nvme_reserve 00:07:49.351 ************************************ 00:07:49.351 18:17:07 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:49.611 ===================================================== 00:07:49.611 NVMe Controller at PCI bus 0, device 19, function 0 00:07:49.611 ===================================================== 00:07:49.611 Reservations: Not Supported 00:07:49.611 ===================================================== 00:07:49.611 NVMe Controller at PCI bus 0, device 16, function 0 00:07:49.611 ===================================================== 00:07:49.611 Reservations: Not Supported 00:07:49.611 ===================================================== 00:07:49.611 NVMe Controller at PCI bus 0, device 17, function 0 00:07:49.611 ===================================================== 00:07:49.611 Reservations: Not Supported 00:07:49.611 ===================================================== 00:07:49.611 NVMe Controller at PCI bus 0, device 18, function 0 00:07:49.611 ===================================================== 00:07:49.611 Reservations: Not Supported 00:07:49.611 Reservation test passed 00:07:49.612 00:07:49.612 real 0m0.234s 00:07:49.612 user 0m0.074s 00:07:49.612 sys 0m0.100s 00:07:49.612 ************************************ 00:07:49.612 END TEST nvme_reserve 00:07:49.612 ************************************ 00:07:49.612 18:17:08 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.612 18:17:08 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:07:49.612 18:17:08 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:49.612 18:17:08 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:49.612 18:17:08 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.612 18:17:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:49.612 ************************************ 00:07:49.612 START TEST nvme_err_injection 00:07:49.612 ************************************ 00:07:49.612 18:17:08 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:49.873 NVMe Error Injection test 00:07:49.873 Attached to 0000:00:13.0 00:07:49.873 Attached to 0000:00:10.0 00:07:49.873 Attached to 0000:00:11.0 00:07:49.873 Attached to 0000:00:12.0 00:07:49.873 0000:00:13.0: get features failed as expected 00:07:49.873 0000:00:10.0: get features failed as expected 00:07:49.873 0000:00:11.0: get features failed as expected 00:07:49.873 0000:00:12.0: get features failed as expected 00:07:49.873 
0000:00:13.0: get features successfully as expected 00:07:49.873 0000:00:10.0: get features successfully as expected 00:07:49.873 0000:00:11.0: get features successfully as expected 00:07:49.873 0000:00:12.0: get features successfully as expected 00:07:49.873 0000:00:12.0: read failed as expected 00:07:49.873 0000:00:13.0: read failed as expected 00:07:49.873 0000:00:10.0: read failed as expected 00:07:49.873 0000:00:11.0: read failed as expected 00:07:49.873 0000:00:12.0: read successfully as expected 00:07:49.873 0000:00:13.0: read successfully as expected 00:07:49.873 0000:00:10.0: read successfully as expected 00:07:49.873 0000:00:11.0: read successfully as expected 00:07:49.873 Cleaning up... 00:07:49.873 00:07:49.873 real 0m0.246s 00:07:49.873 user 0m0.084s 00:07:49.873 sys 0m0.117s 00:07:49.873 ************************************ 00:07:49.873 END TEST nvme_err_injection 00:07:49.873 ************************************ 00:07:49.873 18:17:08 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.873 18:17:08 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:07:49.873 18:17:08 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:49.873 18:17:08 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:07:49.873 18:17:08 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.873 18:17:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:49.873 ************************************ 00:07:49.873 START TEST nvme_overhead 00:07:49.873 ************************************ 00:07:49.873 18:17:08 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:51.255 Initializing NVMe Controllers 00:07:51.255 Attached to 0000:00:13.0 00:07:51.255 Attached to 0000:00:10.0 00:07:51.255 Attached to 0000:00:11.0 00:07:51.255 Attached to 0000:00:12.0 00:07:51.255 Initialization complete. Launching workers. 
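The nvme_err_injection block above follows a fixed pattern per controller: each operation first fails under the injected error ("failed as expected"), then succeeds once the injection is cleared ("successfully as expected"). A checker can confirm every controller saw both sides of each pair (again a log-reading sketch with a hypothetical capture file, not part of the test itself):

import re

PAIR = re.compile(r'(\S+): (get features|read) (failed|successfully) as expected')

def unpaired(lines):
    # Collect which outcomes were seen per (controller, operation) and
    # report any pair missing either its failure or its success.
    seen = {}
    for line in lines:
        for ctrlr, op, outcome in PAIR.findall(line):
            seen.setdefault((ctrlr, op), set()).add(outcome)
    return {k: v for k, v in seen.items() if v != {'failed', 'successfully'}}

print('incomplete pairs:', unpaired(open('err_injection.log')) or 'none')

The nvme_overhead run that follows reports submit and complete times in nanoseconds and then cumulative histograms in the same format as nvme_perf, so the bucket scan shown earlier applies there too; as a rough reading, adding the two averages from its summary (13920.4 + 9012.6 ns) puts the per-I/O software overhead near 23 us.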
00:07:51.255 submit (in ns) avg, min, max = 13920.4, 9957.7, 304617.7 00:07:51.255 complete (in ns) avg, min, max = 9012.6, 7408.5, 341993.1 00:07:51.255 00:07:51.255 Submit histogram 00:07:51.255 ================ 00:07:51.255 Range in us Cumulative Count 00:07:51.255 9.945 - 9.994: 0.0154% ( 1) 00:07:51.255 9.994 - 10.043: 0.0307% ( 1) 00:07:51.255 10.142 - 10.191: 0.0461% ( 1) 00:07:51.255 10.338 - 10.388: 0.0768% ( 2) 00:07:51.255 11.077 - 11.126: 0.1076% ( 2) 00:07:51.255 11.126 - 11.175: 0.4610% ( 23) 00:07:51.255 11.175 - 11.225: 1.3061% ( 55) 00:07:51.255 11.225 - 11.274: 3.2883% ( 129) 00:07:51.255 11.274 - 11.323: 6.8685% ( 233) 00:07:51.255 11.323 - 11.372: 11.4321% ( 297) 00:07:51.255 11.372 - 11.422: 16.4874% ( 329) 00:07:51.255 11.422 - 11.471: 21.6195% ( 334) 00:07:51.255 11.471 - 11.520: 26.1063% ( 292) 00:07:51.255 11.520 - 11.569: 29.1641% ( 199) 00:07:51.255 11.569 - 11.618: 30.8851% ( 112) 00:07:51.255 11.618 - 11.668: 32.6675% ( 116) 00:07:51.255 11.668 - 11.717: 33.9736% ( 85) 00:07:51.255 11.717 - 11.766: 35.0184% ( 68) 00:07:51.255 11.766 - 11.815: 35.7253% ( 46) 00:07:51.255 11.815 - 11.865: 36.6472% ( 60) 00:07:51.255 11.865 - 11.914: 37.3540% ( 46) 00:07:51.255 11.914 - 11.963: 37.8304% ( 31) 00:07:51.255 11.963 - 12.012: 38.2606% ( 28) 00:07:51.255 12.012 - 12.062: 38.6447% ( 25) 00:07:51.255 12.062 - 12.111: 39.0289% ( 25) 00:07:51.255 12.111 - 12.160: 39.2440% ( 14) 00:07:51.255 12.160 - 12.209: 39.4438% ( 13) 00:07:51.255 12.209 - 12.258: 39.5667% ( 8) 00:07:51.255 12.258 - 12.308: 39.6281% ( 4) 00:07:51.255 12.308 - 12.357: 39.7203% ( 6) 00:07:51.255 12.357 - 12.406: 39.7511% ( 2) 00:07:51.255 12.406 - 12.455: 39.8586% ( 7) 00:07:51.255 12.455 - 12.505: 39.8740% ( 1) 00:07:51.255 12.505 - 12.554: 39.9355% ( 4) 00:07:51.255 12.554 - 12.603: 39.9969% ( 4) 00:07:51.255 12.603 - 12.702: 40.1199% ( 8) 00:07:51.255 12.702 - 12.800: 40.4118% ( 19) 00:07:51.255 12.800 - 12.898: 41.0111% ( 39) 00:07:51.255 12.898 - 12.997: 41.8715% ( 56) 00:07:51.255 12.997 - 13.095: 42.9010% ( 67) 00:07:51.255 13.095 - 13.194: 43.9152% ( 66) 00:07:51.255 13.194 - 13.292: 44.9140% ( 65) 00:07:51.255 13.292 - 13.391: 46.0203% ( 72) 00:07:51.255 13.391 - 13.489: 47.2342% ( 79) 00:07:51.255 13.489 - 13.588: 48.0332% ( 52) 00:07:51.255 13.588 - 13.686: 48.8322% ( 52) 00:07:51.255 13.686 - 13.785: 49.5083% ( 44) 00:07:51.255 13.785 - 13.883: 50.2919% ( 51) 00:07:51.255 13.883 - 13.982: 50.9988% ( 46) 00:07:51.255 13.982 - 14.080: 51.9053% ( 59) 00:07:51.255 14.080 - 14.178: 53.7646% ( 121) 00:07:51.255 14.178 - 14.277: 57.0221% ( 212) 00:07:51.255 14.277 - 14.375: 61.9699% ( 322) 00:07:51.255 14.375 - 14.474: 67.5630% ( 364) 00:07:51.255 14.474 - 14.572: 71.9730% ( 287) 00:07:51.255 14.572 - 14.671: 74.9693% ( 195) 00:07:51.255 14.671 - 14.769: 76.7363% ( 115) 00:07:51.255 14.769 - 14.868: 77.9502% ( 79) 00:07:51.255 14.868 - 14.966: 78.5802% ( 41) 00:07:51.255 14.966 - 15.065: 79.1795% ( 39) 00:07:51.255 15.065 - 15.163: 80.5931% ( 92) 00:07:51.255 15.163 - 15.262: 81.9146% ( 86) 00:07:51.255 15.262 - 15.360: 83.4358% ( 99) 00:07:51.255 15.360 - 15.458: 84.5575% ( 73) 00:07:51.255 15.458 - 15.557: 85.9250% ( 89) 00:07:51.255 15.557 - 15.655: 87.6152% ( 110) 00:07:51.255 15.655 - 15.754: 89.4591% ( 120) 00:07:51.255 15.754 - 15.852: 90.7498% ( 84) 00:07:51.255 15.852 - 15.951: 91.6411% ( 58) 00:07:51.255 15.951 - 16.049: 92.1635% ( 34) 00:07:51.255 16.049 - 16.148: 92.7320% ( 37) 00:07:51.255 16.148 - 16.246: 93.0240% ( 19) 00:07:51.255 16.246 - 16.345: 93.3006% ( 18) 00:07:51.255 16.345 - 
16.443: 93.5157% ( 14) 00:07:51.255 16.443 - 16.542: 93.7462% ( 15) 00:07:51.255 16.542 - 16.640: 93.8691% ( 8) 00:07:51.255 16.640 - 16.738: 93.9613% ( 6) 00:07:51.255 16.738 - 16.837: 94.0381% ( 5) 00:07:51.255 16.837 - 16.935: 94.1149% ( 5) 00:07:51.255 16.935 - 17.034: 94.1918% ( 5) 00:07:51.255 17.034 - 17.132: 94.3147% ( 8) 00:07:51.255 17.132 - 17.231: 94.4069% ( 6) 00:07:51.255 17.231 - 17.329: 94.4837% ( 5) 00:07:51.255 17.329 - 17.428: 94.5144% ( 2) 00:07:51.255 17.428 - 17.526: 94.6066% ( 6) 00:07:51.255 17.526 - 17.625: 94.6988% ( 6) 00:07:51.255 17.625 - 17.723: 94.7603% ( 4) 00:07:51.255 17.723 - 17.822: 94.8679% ( 7) 00:07:51.255 17.822 - 17.920: 95.0061% ( 9) 00:07:51.255 17.920 - 18.018: 95.1291% ( 8) 00:07:51.255 18.018 - 18.117: 95.2520% ( 8) 00:07:51.255 18.117 - 18.215: 95.3749% ( 8) 00:07:51.255 18.215 - 18.314: 95.5286% ( 10) 00:07:51.255 18.314 - 18.412: 95.5900% ( 4) 00:07:51.255 18.412 - 18.511: 95.6669% ( 5) 00:07:51.255 18.511 - 18.609: 95.7744% ( 7) 00:07:51.255 18.609 - 18.708: 95.8513% ( 5) 00:07:51.255 18.708 - 18.806: 95.9127% ( 4) 00:07:51.255 18.806 - 18.905: 95.9588% ( 3) 00:07:51.255 18.905 - 19.003: 96.0356% ( 5) 00:07:51.255 19.003 - 19.102: 96.1278% ( 6) 00:07:51.255 19.102 - 19.200: 96.2508% ( 8) 00:07:51.255 19.200 - 19.298: 96.3430% ( 6) 00:07:51.255 19.298 - 19.397: 96.3737% ( 2) 00:07:51.255 19.397 - 19.495: 96.4198% ( 3) 00:07:51.255 19.495 - 19.594: 96.4966% ( 5) 00:07:51.255 19.594 - 19.692: 96.5427% ( 3) 00:07:51.255 19.692 - 19.791: 96.5888% ( 3) 00:07:51.255 19.791 - 19.889: 96.7117% ( 8) 00:07:51.255 19.889 - 19.988: 96.7425% ( 2) 00:07:51.255 19.988 - 20.086: 96.8347% ( 6) 00:07:51.255 20.086 - 20.185: 96.8961% ( 4) 00:07:51.255 20.185 - 20.283: 96.9269% ( 2) 00:07:51.255 20.283 - 20.382: 96.9422% ( 1) 00:07:51.255 20.382 - 20.480: 96.9576% ( 1) 00:07:51.255 20.480 - 20.578: 97.0344% ( 5) 00:07:51.255 20.578 - 20.677: 97.0805% ( 3) 00:07:51.255 20.677 - 20.775: 97.0959% ( 1) 00:07:51.256 20.874 - 20.972: 97.1112% ( 1) 00:07:51.256 21.071 - 21.169: 97.1573% ( 3) 00:07:51.256 21.169 - 21.268: 97.1881% ( 2) 00:07:51.256 21.268 - 21.366: 97.2034% ( 1) 00:07:51.256 21.366 - 21.465: 97.2495% ( 3) 00:07:51.256 21.465 - 21.563: 97.2956% ( 3) 00:07:51.256 21.563 - 21.662: 97.4032% ( 7) 00:07:51.256 21.662 - 21.760: 97.4339% ( 2) 00:07:51.256 21.760 - 21.858: 97.4493% ( 1) 00:07:51.256 21.858 - 21.957: 97.4800% ( 2) 00:07:51.256 21.957 - 22.055: 97.5415% ( 4) 00:07:51.256 22.055 - 22.154: 97.5722% ( 2) 00:07:51.256 22.154 - 22.252: 97.6183% ( 3) 00:07:51.256 22.252 - 22.351: 97.6798% ( 4) 00:07:51.256 22.351 - 22.449: 97.7105% ( 2) 00:07:51.256 22.449 - 22.548: 97.7412% ( 2) 00:07:51.256 22.548 - 22.646: 97.7720% ( 2) 00:07:51.256 22.646 - 22.745: 97.8027% ( 2) 00:07:51.256 22.745 - 22.843: 97.8949% ( 6) 00:07:51.256 22.843 - 22.942: 97.9410% ( 3) 00:07:51.256 22.942 - 23.040: 97.9717% ( 2) 00:07:51.256 23.040 - 23.138: 97.9871% ( 1) 00:07:51.256 23.138 - 23.237: 98.0486% ( 4) 00:07:51.256 23.237 - 23.335: 98.0639% ( 1) 00:07:51.256 23.335 - 23.434: 98.0947% ( 2) 00:07:51.256 23.532 - 23.631: 98.1100% ( 1) 00:07:51.256 23.631 - 23.729: 98.1254% ( 1) 00:07:51.256 23.828 - 23.926: 98.1561% ( 2) 00:07:51.256 23.926 - 24.025: 98.1868% ( 2) 00:07:51.256 24.025 - 24.123: 98.2329% ( 3) 00:07:51.256 24.123 - 24.222: 98.2483% ( 1) 00:07:51.256 24.320 - 24.418: 98.2637% ( 1) 00:07:51.256 24.812 - 24.911: 98.2790% ( 1) 00:07:51.256 25.108 - 25.206: 98.2944% ( 1) 00:07:51.256 26.191 - 26.388: 98.3098% ( 1) 00:07:51.256 27.372 - 27.569: 98.3251% ( 1) 
00:07:51.256 27.766 - 27.963: 98.3405% ( 1) 00:07:51.256 28.357 - 28.554: 98.3559% ( 1) 00:07:51.256 30.326 - 30.523: 98.3712% ( 1) 00:07:51.256 30.917 - 31.114: 98.4327% ( 4) 00:07:51.256 31.114 - 31.311: 98.7093% ( 18) 00:07:51.256 31.311 - 31.508: 99.1395% ( 28) 00:07:51.256 31.508 - 31.705: 99.3546% ( 14) 00:07:51.256 31.705 - 31.902: 99.4929% ( 9) 00:07:51.256 31.902 - 32.098: 99.5851% ( 6) 00:07:51.256 32.098 - 32.295: 99.6312% ( 3) 00:07:51.256 32.295 - 32.492: 99.6773% ( 3) 00:07:51.256 32.492 - 32.689: 99.6927% ( 1) 00:07:51.256 32.689 - 32.886: 99.7234% ( 2) 00:07:51.256 33.083 - 33.280: 99.7388% ( 1) 00:07:51.256 34.265 - 34.462: 99.7541% ( 1) 00:07:51.256 35.249 - 35.446: 99.7695% ( 1) 00:07:51.256 37.415 - 37.612: 99.7849% ( 1) 00:07:51.256 38.597 - 38.794: 99.8002% ( 1) 00:07:51.256 38.991 - 39.188: 99.8156% ( 1) 00:07:51.256 43.126 - 43.323: 99.8310% ( 1) 00:07:51.256 45.489 - 45.686: 99.8463% ( 1) 00:07:51.256 47.655 - 47.852: 99.8617% ( 1) 00:07:51.256 48.049 - 48.246: 99.8771% ( 1) 00:07:51.256 48.443 - 48.640: 99.8924% ( 1) 00:07:51.256 48.640 - 48.837: 99.9078% ( 1) 00:07:51.256 48.837 - 49.034: 99.9232% ( 1) 00:07:51.256 57.108 - 57.502: 99.9385% ( 1) 00:07:51.256 57.502 - 57.895: 99.9539% ( 1) 00:07:51.256 58.683 - 59.077: 99.9693% ( 1) 00:07:51.256 91.372 - 91.766: 99.9846% ( 1) 00:07:51.256 304.049 - 305.625: 100.0000% ( 1) 00:07:51.256 00:07:51.256 Complete histogram 00:07:51.256 ================== 00:07:51.256 Range in us Cumulative Count 00:07:51.256 7.385 - 7.434: 0.0154% ( 1) 00:07:51.256 7.434 - 7.483: 0.0922% ( 5) 00:07:51.257 7.483 - 7.532: 1.1524% ( 69) 00:07:51.257 7.532 - 7.582: 4.1334% ( 194) 00:07:51.257 7.582 - 7.631: 8.7738% ( 302) 00:07:51.257 7.631 - 7.680: 14.6128% ( 380) 00:07:51.257 7.680 - 7.729: 21.9115% ( 475) 00:07:51.257 7.729 - 7.778: 28.0424% ( 399) 00:07:51.257 7.778 - 7.828: 32.7904% ( 309) 00:07:51.257 7.828 - 7.877: 35.9865% ( 208) 00:07:51.257 7.877 - 7.926: 37.8457% ( 121) 00:07:51.257 7.926 - 7.975: 39.2747% ( 93) 00:07:51.257 7.975 - 8.025: 40.1813% ( 59) 00:07:51.257 8.025 - 8.074: 41.2569% ( 70) 00:07:51.257 8.074 - 8.123: 42.2557% ( 65) 00:07:51.257 8.123 - 8.172: 42.9318% ( 44) 00:07:51.257 8.172 - 8.222: 44.1149% ( 77) 00:07:51.257 8.222 - 8.271: 45.0215% ( 59) 00:07:51.257 8.271 - 8.320: 46.0049% ( 64) 00:07:51.257 8.320 - 8.369: 47.0037% ( 65) 00:07:51.257 8.369 - 8.418: 47.9103% ( 59) 00:07:51.257 8.418 - 8.468: 48.7861% ( 57) 00:07:51.257 8.468 - 8.517: 49.6312% ( 55) 00:07:51.257 8.517 - 8.566: 50.8144% ( 77) 00:07:51.257 8.566 - 8.615: 52.5353% ( 112) 00:07:51.257 8.615 - 8.665: 55.0553% ( 164) 00:07:51.257 8.665 - 8.714: 58.0055% ( 192) 00:07:51.257 8.714 - 8.763: 61.1094% ( 202) 00:07:51.257 8.763 - 8.812: 64.5667% ( 225) 00:07:51.257 8.812 - 8.862: 68.4696% ( 254) 00:07:51.257 8.862 - 8.911: 71.5734% ( 202) 00:07:51.257 8.911 - 8.960: 75.0922% ( 229) 00:07:51.257 8.960 - 9.009: 77.5507% ( 160) 00:07:51.257 9.009 - 9.058: 79.6097% ( 134) 00:07:51.257 9.058 - 9.108: 81.2692% ( 108) 00:07:51.257 9.108 - 9.157: 82.4831% ( 79) 00:07:51.257 9.157 - 9.206: 83.3897% ( 59) 00:07:51.257 9.206 - 9.255: 84.0043% ( 40) 00:07:51.257 9.255 - 9.305: 84.4192% ( 27) 00:07:51.257 9.305 - 9.354: 84.6958% ( 18) 00:07:51.257 9.354 - 9.403: 84.8187% ( 8) 00:07:51.257 9.403 - 9.452: 84.9570% ( 9) 00:07:51.257 9.452 - 9.502: 85.0492% ( 6) 00:07:51.257 9.502 - 9.551: 85.0645% ( 1) 00:07:51.257 9.551 - 9.600: 85.0953% ( 2) 00:07:51.257 9.600 - 9.649: 85.1260% ( 2) 00:07:51.257 9.797 - 9.846: 85.1414% ( 1) 00:07:51.257 9.945 - 9.994: 85.2028% ( 
4) 00:07:51.257 10.043 - 10.092: 85.2950% ( 6) 00:07:51.257 10.092 - 10.142: 85.3258% ( 2) 00:07:51.257 10.142 - 10.191: 85.3872% ( 4) 00:07:51.257 10.191 - 10.240: 85.4487% ( 4) 00:07:51.257 10.240 - 10.289: 85.7714% ( 21) 00:07:51.257 10.289 - 10.338: 86.4014% ( 41) 00:07:51.257 10.338 - 10.388: 87.0467% ( 42) 00:07:51.257 10.388 - 10.437: 87.8765% ( 54) 00:07:51.257 10.437 - 10.486: 88.3835% ( 33) 00:07:51.257 10.486 - 10.535: 88.7677% ( 25) 00:07:51.257 10.535 - 10.585: 89.2440% ( 31) 00:07:51.257 10.585 - 10.634: 89.7357% ( 32) 00:07:51.257 10.634 - 10.683: 90.3657% ( 41) 00:07:51.257 10.683 - 10.732: 91.1186% ( 49) 00:07:51.257 10.732 - 10.782: 91.8869% ( 50) 00:07:51.257 10.782 - 10.831: 92.4708% ( 38) 00:07:51.257 10.831 - 10.880: 93.0547% ( 38) 00:07:51.257 10.880 - 10.929: 93.4388% ( 25) 00:07:51.257 10.929 - 10.978: 93.8537% ( 27) 00:07:51.257 10.978 - 11.028: 94.3915% ( 35) 00:07:51.257 11.028 - 11.077: 94.7603% ( 24) 00:07:51.257 11.077 - 11.126: 95.0369% ( 18) 00:07:51.257 11.126 - 11.175: 95.3903% ( 23) 00:07:51.257 11.175 - 11.225: 95.6361% ( 16) 00:07:51.257 11.225 - 11.274: 95.8974% ( 17) 00:07:51.257 11.274 - 11.323: 96.0510% ( 10) 00:07:51.257 11.323 - 11.372: 96.1893% ( 9) 00:07:51.257 11.372 - 11.422: 96.2661% ( 5) 00:07:51.257 11.422 - 11.471: 96.3122% ( 3) 00:07:51.257 11.471 - 11.520: 96.3891% ( 5) 00:07:51.257 11.520 - 11.569: 96.4659% ( 5) 00:07:51.257 11.569 - 11.618: 96.5581% ( 6) 00:07:51.257 11.618 - 11.668: 96.5734% ( 1) 00:07:51.257 11.668 - 11.717: 96.5888% ( 1) 00:07:51.257 11.963 - 12.012: 96.6042% ( 1) 00:07:51.257 12.062 - 12.111: 96.6195% ( 1) 00:07:51.257 12.603 - 12.702: 96.6349% ( 1) 00:07:51.257 13.095 - 13.194: 96.6503% ( 1) 00:07:51.257 13.194 - 13.292: 96.6964% ( 3) 00:07:51.257 13.292 - 13.391: 96.7117% ( 1) 00:07:51.257 13.489 - 13.588: 96.7578% ( 3) 00:07:51.257 13.588 - 13.686: 96.7886% ( 2) 00:07:51.257 13.686 - 13.785: 96.8193% ( 2) 00:07:51.257 13.785 - 13.883: 96.8347% ( 1) 00:07:51.257 13.883 - 13.982: 96.8654% ( 2) 00:07:51.257 13.982 - 14.080: 96.8808% ( 1) 00:07:51.257 14.080 - 14.178: 96.9269% ( 3) 00:07:51.257 14.178 - 14.277: 96.9730% ( 3) 00:07:51.257 14.277 - 14.375: 97.0652% ( 6) 00:07:51.257 14.375 - 14.474: 97.1266% ( 4) 00:07:51.257 14.572 - 14.671: 97.1420% ( 1) 00:07:51.257 14.671 - 14.769: 97.2034% ( 4) 00:07:51.257 14.868 - 14.966: 97.2495% ( 3) 00:07:51.257 14.966 - 15.065: 97.2803% ( 2) 00:07:51.257 15.065 - 15.163: 97.2956% ( 1) 00:07:51.257 15.262 - 15.360: 97.3110% ( 1) 00:07:51.257 15.360 - 15.458: 97.3264% ( 1) 00:07:51.257 15.458 - 15.557: 97.3571% ( 2) 00:07:51.257 15.557 - 15.655: 97.4339% ( 5) 00:07:51.257 15.655 - 15.754: 97.4954% ( 4) 00:07:51.257 15.754 - 15.852: 97.5415% ( 3) 00:07:51.257 15.852 - 15.951: 97.5722% ( 2) 00:07:51.257 15.951 - 16.049: 97.6183% ( 3) 00:07:51.257 16.049 - 16.148: 97.6951% ( 5) 00:07:51.257 16.148 - 16.246: 97.7105% ( 1) 00:07:51.257 16.246 - 16.345: 97.7259% ( 1) 00:07:51.257 16.345 - 16.443: 97.7412% ( 1) 00:07:51.257 16.443 - 16.542: 97.7720% ( 2) 00:07:51.257 16.542 - 16.640: 97.7873% ( 1) 00:07:51.257 16.640 - 16.738: 97.8334% ( 3) 00:07:51.257 16.738 - 16.837: 97.8642% ( 2) 00:07:51.257 16.935 - 17.034: 97.8795% ( 1) 00:07:51.257 17.034 - 17.132: 97.8949% ( 1) 00:07:51.257 17.132 - 17.231: 97.9410% ( 3) 00:07:51.257 17.231 - 17.329: 97.9717% ( 2) 00:07:51.257 17.428 - 17.526: 98.0025% ( 2) 00:07:51.257 17.625 - 17.723: 98.0332% ( 2) 00:07:51.257 17.723 - 17.822: 98.0947% ( 4) 00:07:51.257 17.822 - 17.920: 98.1100% ( 1) 00:07:51.257 17.920 - 18.018: 98.1254% ( 1) 
00:07:51.257 18.018 - 18.117: 98.1407% ( 1) 00:07:51.257 18.215 - 18.314: 98.1561% ( 1) 00:07:51.257 18.511 - 18.609: 98.2022% ( 3) 00:07:51.257 18.609 - 18.708: 98.2176% ( 1) 00:07:51.257 18.806 - 18.905: 98.2329% ( 1) 00:07:51.257 19.003 - 19.102: 98.2483% ( 1) 00:07:51.257 19.397 - 19.495: 98.2637% ( 1) 00:07:51.258 19.988 - 20.086: 98.2790% ( 1) 00:07:51.258 20.283 - 20.382: 98.2944% ( 1) 00:07:51.258 20.775 - 20.874: 98.3098% ( 1) 00:07:51.258 21.563 - 21.662: 98.3251% ( 1) 00:07:51.258 22.252 - 22.351: 98.4327% ( 7) 00:07:51.258 22.351 - 22.449: 98.6478% ( 14) 00:07:51.258 22.449 - 22.548: 98.8629% ( 14) 00:07:51.258 22.548 - 22.646: 99.0934% ( 15) 00:07:51.258 22.646 - 22.745: 99.3239% ( 15) 00:07:51.258 22.745 - 22.843: 99.4622% ( 9) 00:07:51.258 22.843 - 22.942: 99.4776% ( 1) 00:07:51.258 23.040 - 23.138: 99.5237% ( 3) 00:07:51.258 23.138 - 23.237: 99.5390% ( 1) 00:07:51.258 23.237 - 23.335: 99.5851% ( 3) 00:07:51.258 23.335 - 23.434: 99.6159% ( 2) 00:07:51.258 23.532 - 23.631: 99.6466% ( 2) 00:07:51.258 23.631 - 23.729: 99.6620% ( 1) 00:07:51.258 23.926 - 24.025: 99.6773% ( 1) 00:07:51.258 24.911 - 25.009: 99.6927% ( 1) 00:07:51.258 25.403 - 25.600: 99.7081% ( 1) 00:07:51.258 25.600 - 25.797: 99.7234% ( 1) 00:07:51.258 25.797 - 25.994: 99.7388% ( 1) 00:07:51.258 26.585 - 26.782: 99.7541% ( 1) 00:07:51.258 27.569 - 27.766: 99.7695% ( 1) 00:07:51.258 29.342 - 29.538: 99.7849% ( 1) 00:07:51.258 29.735 - 29.932: 99.8002% ( 1) 00:07:51.258 32.689 - 32.886: 99.8156% ( 1) 00:07:51.258 34.265 - 34.462: 99.8310% ( 1) 00:07:51.258 37.022 - 37.218: 99.8617% ( 2) 00:07:51.258 39.778 - 39.975: 99.8771% ( 1) 00:07:51.258 39.975 - 40.172: 99.8924% ( 1) 00:07:51.258 41.748 - 41.945: 99.9078% ( 1) 00:07:51.258 43.717 - 43.914: 99.9232% ( 1) 00:07:51.258 47.852 - 48.049: 99.9385% ( 1) 00:07:51.258 55.532 - 55.926: 99.9539% ( 1) 00:07:51.258 66.166 - 66.560: 99.9693% ( 1) 00:07:51.258 70.105 - 70.498: 99.9846% ( 1) 00:07:51.258 341.858 - 343.434: 100.0000% ( 1) 00:07:51.258 00:07:51.258 00:07:51.258 real 0m1.242s 00:07:51.258 user 0m1.072s 00:07:51.258 sys 0m0.110s 00:07:51.258 18:17:09 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.258 18:17:09 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:07:51.258 ************************************ 00:07:51.258 END TEST nvme_overhead 00:07:51.258 ************************************ 00:07:51.258 18:17:09 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:51.258 18:17:09 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:51.258 18:17:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.258 18:17:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:51.258 ************************************ 00:07:51.258 START TEST nvme_arbitration 00:07:51.258 ************************************ 00:07:51.258 18:17:09 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:54.540 Initializing NVMe Controllers 00:07:54.540 Attached to 0000:00:13.0 00:07:54.540 Attached to 0000:00:10.0 00:07:54.540 Attached to 0000:00:11.0 00:07:54.540 Attached to 0000:00:12.0 00:07:54.540 Associating QEMU NVMe Ctrl (12343 ) with lcore 0 00:07:54.540 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:07:54.540 Associating QEMU NVMe Ctrl (12341 ) with lcore 2 00:07:54.540 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:07:54.540 Associating QEMU NVMe 
Ctrl (12342 ) with lcore 0 00:07:54.540 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:07:54.540 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:07:54.540 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:07:54.540 Initialization complete. Launching workers. 00:07:54.540 Starting thread on core 1 with urgent priority queue 00:07:54.540 Starting thread on core 2 with urgent priority queue 00:07:54.540 Starting thread on core 3 with urgent priority queue 00:07:54.540 Starting thread on core 0 with urgent priority queue 00:07:54.540 QEMU NVMe Ctrl (12343 ) core 0: 896.00 IO/s 111.61 secs/100000 ios 00:07:54.540 QEMU NVMe Ctrl (12342 ) core 0: 896.00 IO/s 111.61 secs/100000 ios 00:07:54.540 QEMU NVMe Ctrl (12340 ) core 1: 938.67 IO/s 106.53 secs/100000 ios 00:07:54.540 QEMU NVMe Ctrl (12342 ) core 1: 938.67 IO/s 106.53 secs/100000 ios 00:07:54.540 QEMU NVMe Ctrl (12341 ) core 2: 981.33 IO/s 101.90 secs/100000 ios 00:07:54.540 QEMU NVMe Ctrl (12342 ) core 3: 1024.00 IO/s 97.66 secs/100000 ios 00:07:54.540 ======================================================== 00:07:54.540 00:07:54.540 00:07:54.540 real 0m3.300s 00:07:54.540 user 0m9.197s 00:07:54.540 sys 0m0.131s 00:07:54.540 18:17:12 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.540 ************************************ 00:07:54.540 END TEST nvme_arbitration 00:07:54.540 18:17:12 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:07:54.540 ************************************ 00:07:54.540 18:17:12 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:54.540 18:17:12 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:54.540 18:17:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.540 18:17:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:54.540 ************************************ 00:07:54.540 START TEST nvme_single_aen 00:07:54.540 ************************************ 00:07:54.540 18:17:13 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:54.801 Asynchronous Event Request test 00:07:54.801 Attached to 0000:00:13.0 00:07:54.801 Attached to 0000:00:10.0 00:07:54.801 Attached to 0000:00:11.0 00:07:54.801 Attached to 0000:00:12.0 00:07:54.801 Reset controller to setup AER completions for this process 00:07:54.801 Registering asynchronous event callbacks... 
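For orientation: the AER test starting here works by lowering every controller's temperature threshold below its current temperature, waiting for each controller to fire an Asynchronous Event Request, and restoring the threshold inside the aer_cb callback, which is exactly the sequence the following lines log. A minimal sketch of the invocation as this harness issues it (-T selects the temperature-threshold exercise; -i sets the shared-memory group ID):

# Arm temperature-threshold AERs on all attached controllers and wait for them.
/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0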
00:07:54.801 Getting orig temperature thresholds of all controllers 00:07:54.801 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:54.801 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:54.801 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:54.801 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:54.801 Setting all controllers temperature threshold low to trigger AER 00:07:54.801 Waiting for all controllers temperature threshold to be set lower 00:07:54.801 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:54.801 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:54.801 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:54.801 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:54.801 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:54.801 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:54.801 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:54.801 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:54.801 Waiting for all controllers to trigger AER and reset threshold 00:07:54.801 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:54.801 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:54.801 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:54.801 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:54.801 Cleaning up... 00:07:54.801 00:07:54.801 real 0m0.215s 00:07:54.802 user 0m0.080s 00:07:54.802 sys 0m0.095s 00:07:54.802 18:17:13 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.802 18:17:13 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:07:54.802 ************************************ 00:07:54.802 END TEST nvme_single_aen 00:07:54.802 ************************************ 00:07:54.802 18:17:13 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:07:54.802 18:17:13 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:54.802 18:17:13 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.802 18:17:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:54.802 ************************************ 00:07:54.802 START TEST nvme_doorbell_aers 00:07:54.802 ************************************ 00:07:54.802 18:17:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:07:54.802 18:17:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:07:54.802 18:17:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:07:54.802 18:17:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:07:54.802 18:17:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:07:54.802 18:17:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:54.802 18:17:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:07:54.802 18:17:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:54.802 18:17:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:54.802 18:17:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 
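The xtrace above shows how nvme_doorbell_aers enumerates controllers before looping over them with the doorbell_aers binary. A standalone sketch of the same enumeration, assuming gen_nvme.sh and jq behave as they do in this environment:

# Collect the PCI address (traddr) of every NVMe controller gen_nvme.sh reports.
bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
printf '%s\n' "${bdfs[@]}"

Note that the Failure: lines in the doorbell output below come from sub-tests that deliberately issue invalid and overflowing doorbell writes; the run still proceeds through all three sub-tests per controller and reaches END TEST nvme_doorbell_aers, so they are part of the exercise rather than a suite failure.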
00:07:54.802 18:17:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:54.802 18:17:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:54.802 18:17:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:54.802 18:17:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:55.063 [2024-11-20 18:17:13.523474] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63105) is not found. Dropping the request. 00:08:05.033 Executing: test_write_invalid_db 00:08:05.033 Waiting for AER completion... 00:08:05.033 Failure: test_write_invalid_db 00:08:05.033 00:08:05.033 Executing: test_invalid_db_write_overflow_sq 00:08:05.033 Waiting for AER completion... 00:08:05.033 Failure: test_invalid_db_write_overflow_sq 00:08:05.033 00:08:05.033 Executing: test_invalid_db_write_overflow_cq 00:08:05.033 Waiting for AER completion... 00:08:05.033 Failure: test_invalid_db_write_overflow_cq 00:08:05.033 00:08:05.033 18:17:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:05.033 18:17:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:05.033 [2024-11-20 18:17:23.546519] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63105) is not found. Dropping the request. 00:08:15.026 Executing: test_write_invalid_db 00:08:15.026 Waiting for AER completion... 00:08:15.026 Failure: test_write_invalid_db 00:08:15.026 00:08:15.026 Executing: test_invalid_db_write_overflow_sq 00:08:15.026 Waiting for AER completion... 00:08:15.026 Failure: test_invalid_db_write_overflow_sq 00:08:15.026 00:08:15.026 Executing: test_invalid_db_write_overflow_cq 00:08:15.026 Waiting for AER completion... 00:08:15.026 Failure: test_invalid_db_write_overflow_cq 00:08:15.026 00:08:15.026 18:17:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:15.026 18:17:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:15.026 [2024-11-20 18:17:33.589463] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63105) is not found. Dropping the request. 00:08:25.017 Executing: test_write_invalid_db 00:08:25.017 Waiting for AER completion... 00:08:25.017 Failure: test_write_invalid_db 00:08:25.017 00:08:25.017 Executing: test_invalid_db_write_overflow_sq 00:08:25.017 Waiting for AER completion... 00:08:25.017 Failure: test_invalid_db_write_overflow_sq 00:08:25.017 00:08:25.017 Executing: test_invalid_db_write_overflow_cq 00:08:25.017 Waiting for AER completion... 
00:08:25.017 Failure: test_invalid_db_write_overflow_cq 00:08:25.017 00:08:25.017 18:17:43 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:25.017 18:17:43 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:25.017 [2024-11-20 18:17:43.609155] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63105) is not found. Dropping the request. 00:08:34.983 Executing: test_write_invalid_db 00:08:34.983 Waiting for AER completion... 00:08:34.983 Failure: test_write_invalid_db 00:08:34.983 00:08:34.983 Executing: test_invalid_db_write_overflow_sq 00:08:34.983 Waiting for AER completion... 00:08:34.983 Failure: test_invalid_db_write_overflow_sq 00:08:34.983 00:08:34.983 Executing: test_invalid_db_write_overflow_cq 00:08:34.983 Waiting for AER completion... 00:08:34.983 Failure: test_invalid_db_write_overflow_cq 00:08:34.983 00:08:34.983 00:08:34.983 real 0m40.171s 00:08:34.983 user 0m34.119s 00:08:34.983 sys 0m5.703s 00:08:34.983 18:17:53 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.983 18:17:53 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:34.983 ************************************ 00:08:34.983 END TEST nvme_doorbell_aers 00:08:34.983 ************************************ 00:08:34.983 18:17:53 nvme -- nvme/nvme.sh@97 -- # uname 00:08:34.983 18:17:53 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:34.983 18:17:53 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:34.983 18:17:53 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:34.983 18:17:53 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.983 18:17:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:34.983 ************************************ 00:08:34.983 START TEST nvme_multi_aen 00:08:34.983 ************************************ 00:08:34.983 18:17:53 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:35.242 [2024-11-20 18:17:53.649322] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63105) is not found. Dropping the request. 00:08:35.242 [2024-11-20 18:17:53.649701] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63105) is not found. Dropping the request. 00:08:35.242 [2024-11-20 18:17:53.649749] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63105) is not found. Dropping the request. 00:08:35.242 [2024-11-20 18:17:53.651058] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63105) is not found. Dropping the request. 00:08:35.242 [2024-11-20 18:17:53.651161] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63105) is not found. Dropping the request. 00:08:35.242 [2024-11-20 18:17:53.651199] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63105) is not found. Dropping the request. 00:08:35.242 [2024-11-20 18:17:53.652124] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63105) is not found. 
Dropping the request. 00:08:35.242 [2024-11-20 18:17:53.652192] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63105) is not found. Dropping the request. 00:08:35.242 [2024-11-20 18:17:53.652223] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63105) is not found. Dropping the request. 00:08:35.242 [2024-11-20 18:17:53.653149] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63105) is not found. Dropping the request. 00:08:35.242 [2024-11-20 18:17:53.653209] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63105) is not found. Dropping the request. 00:08:35.242 [2024-11-20 18:17:53.653241] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63105) is not found. Dropping the request. 00:08:35.243 Child process pid: 63625 00:08:35.243 [Child] Asynchronous Event Request test 00:08:35.243 [Child] Attached to 0000:00:13.0 00:08:35.243 [Child] Attached to 0000:00:10.0 00:08:35.243 [Child] Attached to 0000:00:11.0 00:08:35.243 [Child] Attached to 0000:00:12.0 00:08:35.243 [Child] Registering asynchronous event callbacks... 00:08:35.243 [Child] Getting orig temperature thresholds of all controllers 00:08:35.243 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:35.243 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:35.243 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:35.243 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:35.243 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:35.243 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:35.243 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:35.243 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:35.243 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:35.243 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:35.243 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:35.243 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:35.243 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:35.243 [Child] Cleaning up... 00:08:35.501 Asynchronous Event Request test 00:08:35.501 Attached to 0000:00:13.0 00:08:35.501 Attached to 0000:00:10.0 00:08:35.501 Attached to 0000:00:11.0 00:08:35.501 Attached to 0000:00:12.0 00:08:35.501 Reset controller to setup AER completions for this process 00:08:35.501 Registering asynchronous event callbacks... 
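The [Child] block above and the parent pass that continues below are the two halves of the multi-process AER test: the binary spawns a child (pid 63625 above) that registers its own callbacks and waits, then the parent lowers the temperature thresholds so both processes observe the resulting AERs. The invocation differs from nvme_single_aen only by the -m flag:

# Same temperature-threshold AER exercise, plus -m for the multi-process (child) pass.
/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0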
00:08:35.501 Getting orig temperature thresholds of all controllers 00:08:35.501 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:35.501 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:35.501 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:35.501 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:35.501 Setting all controllers temperature threshold low to trigger AER 00:08:35.501 Waiting for all controllers temperature threshold to be set lower 00:08:35.501 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:35.501 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:35.501 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:35.501 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:35.501 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:35.501 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:35.501 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:35.501 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:35.501 Waiting for all controllers to trigger AER and reset threshold 00:08:35.501 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:35.501 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:35.501 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:35.501 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:35.501 Cleaning up... 00:08:35.501 00:08:35.501 real 0m0.411s 00:08:35.501 user 0m0.130s 00:08:35.501 sys 0m0.181s 00:08:35.501 18:17:53 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.501 18:17:53 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:35.501 ************************************ 00:08:35.501 END TEST nvme_multi_aen 00:08:35.501 ************************************ 00:08:35.501 18:17:53 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:35.501 18:17:53 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:35.501 18:17:53 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.501 18:17:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:35.501 ************************************ 00:08:35.501 START TEST nvme_startup 00:08:35.501 ************************************ 00:08:35.501 18:17:53 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:35.501 Initializing NVMe Controllers 00:08:35.501 Attached to 0000:00:13.0 00:08:35.501 Attached to 0000:00:10.0 00:08:35.501 Attached to 0000:00:11.0 00:08:35.501 Attached to 0000:00:12.0 00:08:35.501 Initialization complete. 00:08:35.501 Time used:150045.297 (us). 
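The "Time used" figure just printed is the whole measurement: the startup binary attaches to every controller, reports the total initialization time in microseconds, and exits. As invoked by this run (judging from its use here, -t appears to be the allowed startup time in microseconds; that reading is an inference from this trace, not a documented flag description):

# Attach to all controllers and report init time; -t 1000000 as passed by the harness.
/home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000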
00:08:35.760 00:08:35.760 real 0m0.212s 00:08:35.760 user 0m0.072s 00:08:35.760 sys 0m0.096s 00:08:35.760 ************************************ 00:08:35.760 END TEST nvme_startup 00:08:35.760 ************************************ 00:08:35.760 18:17:54 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.760 18:17:54 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:35.760 18:17:54 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:35.760 18:17:54 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:35.760 18:17:54 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.760 18:17:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:35.760 ************************************ 00:08:35.760 START TEST nvme_multi_secondary 00:08:35.760 ************************************ 00:08:35.760 18:17:54 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:08:35.760 18:17:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63676 00:08:35.760 18:17:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63677 00:08:35.760 18:17:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:35.760 18:17:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:35.760 18:17:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:39.043 Initializing NVMe Controllers 00:08:39.043 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:39.043 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:39.043 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:39.043 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:39.043 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:39.043 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:39.043 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:39.043 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:39.043 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:39.043 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:39.043 Initialization complete. Launching workers. 
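For reference, the three spdk_nvme_perf invocations above differ only in core mask and duration, and they share -i 0 so that they can attach to the same controllers as one primary process plus secondaries, which is the point of nvme_multi_secondary. The pattern, reduced to its shape with the exact commands the harness printed:

# One 5 s reader on core 0 plus two 3 s readers on cores 1 and 2:
# 4 KiB reads (-w read -o 4096) at queue depth 16, all in shm group 0.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &
wait

The latency tables that follow report per-namespace IOPS and latency for each instance as it completes.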
00:08:39.043 ======================================================== 00:08:39.043 Latency(us) 00:08:39.044 Device Information : IOPS MiB/s Average min max 00:08:39.044 PCIE (0000:00:13.0) NSID 1 from core 1: 6751.71 26.37 2369.36 937.09 12634.02 00:08:39.044 PCIE (0000:00:10.0) NSID 1 from core 1: 6751.71 26.37 2368.37 903.92 14982.37 00:08:39.044 PCIE (0000:00:11.0) NSID 1 from core 1: 6751.71 26.37 2369.73 918.50 14657.78 00:08:39.044 PCIE (0000:00:12.0) NSID 1 from core 1: 6751.71 26.37 2371.09 923.16 13999.73 00:08:39.044 PCIE (0000:00:12.0) NSID 2 from core 1: 6751.71 26.37 2371.09 924.01 12960.32 00:08:39.044 PCIE (0000:00:12.0) NSID 3 from core 1: 6751.71 26.37 2371.04 925.29 13837.50 00:08:39.044 ======================================================== 00:08:39.044 Total : 40510.27 158.24 2370.11 903.92 14982.37 00:08:39.044 00:08:39.044 Initializing NVMe Controllers 00:08:39.044 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:39.044 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:39.044 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:39.044 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:39.044 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:39.044 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:39.044 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:39.044 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:39.044 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:39.044 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:39.044 Initialization complete. Launching workers. 00:08:39.044 ======================================================== 00:08:39.044 Latency(us) 00:08:39.044 Device Information : IOPS MiB/s Average min max 00:08:39.044 PCIE (0000:00:13.0) NSID 1 from core 2: 2648.10 10.34 6041.62 1356.89 29394.58 00:08:39.044 PCIE (0000:00:10.0) NSID 1 from core 2: 2648.10 10.34 6043.02 1333.38 28937.88 00:08:39.044 PCIE (0000:00:11.0) NSID 1 from core 2: 2648.10 10.34 6044.83 1364.00 30499.03 00:08:39.044 PCIE (0000:00:12.0) NSID 1 from core 2: 2648.10 10.34 6044.23 1388.03 31491.49 00:08:39.044 PCIE (0000:00:12.0) NSID 2 from core 2: 2648.10 10.34 6044.07 1419.04 34988.76 00:08:39.044 PCIE (0000:00:12.0) NSID 3 from core 2: 2648.10 10.34 6044.39 1440.37 36016.31 00:08:39.044 ======================================================== 00:08:39.044 Total : 15888.57 62.06 6043.69 1333.38 36016.31 00:08:39.044 00:08:39.044 18:17:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63676 00:08:40.958 Initializing NVMe Controllers 00:08:40.958 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:40.958 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:40.958 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:40.958 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:40.958 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:40.958 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:40.958 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:40.958 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:40.958 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:40.958 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:40.958 Initialization complete. Launching workers. 
00:08:40.958 ======================================================== 00:08:40.958 Latency(us) 00:08:40.958 Device Information : IOPS MiB/s Average min max 00:08:40.958 PCIE (0000:00:13.0) NSID 1 from core 0: 7251.88 28.33 2205.92 723.20 12585.35 00:08:40.958 PCIE (0000:00:10.0) NSID 1 from core 0: 7251.88 28.33 2205.16 697.67 13472.08 00:08:40.958 PCIE (0000:00:11.0) NSID 1 from core 0: 7251.88 28.33 2206.10 724.08 12746.04 00:08:40.958 PCIE (0000:00:12.0) NSID 1 from core 0: 7251.88 28.33 2206.08 715.90 12395.62 00:08:40.958 PCIE (0000:00:12.0) NSID 2 from core 0: 7251.88 28.33 2206.06 719.14 13756.37 00:08:40.958 PCIE (0000:00:12.0) NSID 3 from core 0: 7251.88 28.33 2206.04 724.43 14090.09 00:08:40.958 ======================================================== 00:08:40.958 Total : 43511.30 169.97 2205.89 697.67 14090.09 00:08:40.958 00:08:40.958 18:17:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63677 00:08:40.959 18:17:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63746 00:08:40.959 18:17:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:40.959 18:17:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63747 00:08:40.959 18:17:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:40.959 18:17:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:44.256 Initializing NVMe Controllers 00:08:44.256 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:44.256 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:44.256 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:44.256 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:44.256 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:44.256 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:44.256 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:44.256 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:44.256 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:44.256 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:44.256 Initialization complete. Launching workers. 
00:08:44.256 ======================================================== 00:08:44.256 Latency(us) 00:08:44.256 Device Information : IOPS MiB/s Average min max 00:08:44.256 PCIE (0000:00:13.0) NSID 1 from core 0: 4193.01 16.38 3815.39 724.78 14712.01 00:08:44.256 PCIE (0000:00:10.0) NSID 1 from core 0: 4193.01 16.38 3814.94 705.89 14145.55 00:08:44.256 PCIE (0000:00:11.0) NSID 1 from core 0: 4193.01 16.38 3816.40 724.45 14096.82 00:08:44.256 PCIE (0000:00:12.0) NSID 1 from core 0: 4193.01 16.38 3816.90 724.06 12389.23 00:08:44.256 PCIE (0000:00:12.0) NSID 2 from core 0: 4193.01 16.38 3816.89 715.88 12459.67 00:08:44.256 PCIE (0000:00:12.0) NSID 3 from core 0: 4198.33 16.40 3812.07 718.40 12831.78 00:08:44.256 ======================================================== 00:08:44.256 Total : 25163.36 98.29 3815.43 705.89 14712.01 00:08:44.256 00:08:44.256 Initializing NVMe Controllers 00:08:44.256 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:44.256 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:44.256 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:44.256 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:44.256 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:44.256 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:44.256 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:44.256 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:44.256 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:44.256 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:44.256 Initialization complete. Launching workers. 00:08:44.256 ======================================================== 00:08:44.256 Latency(us) 00:08:44.256 Device Information : IOPS MiB/s Average min max 00:08:44.256 PCIE (0000:00:13.0) NSID 1 from core 1: 4082.95 15.95 3918.27 1037.75 13708.23 00:08:44.256 PCIE (0000:00:10.0) NSID 1 from core 1: 4082.95 15.95 3917.69 1160.82 14256.58 00:08:44.256 PCIE (0000:00:11.0) NSID 1 from core 1: 4082.95 15.95 3919.33 1042.23 15320.58 00:08:44.256 PCIE (0000:00:12.0) NSID 1 from core 1: 4082.95 15.95 3920.67 1068.00 13153.20 00:08:44.256 PCIE (0000:00:12.0) NSID 2 from core 1: 4082.95 15.95 3921.39 1060.84 12927.75 00:08:44.256 PCIE (0000:00:12.0) NSID 3 from core 1: 4082.95 15.95 3922.42 1079.00 12385.32 00:08:44.256 ======================================================== 00:08:44.256 Total : 24497.68 95.69 3919.96 1037.75 15320.58 00:08:44.256 00:08:46.796 Initializing NVMe Controllers 00:08:46.796 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:46.796 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:46.796 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:46.796 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:46.796 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:46.796 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:46.796 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:46.796 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:46.796 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:46.796 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:46.796 Initialization complete. Launching workers. 
00:08:46.796 ======================================================== 00:08:46.796 Latency(us) 00:08:46.796 Device Information : IOPS MiB/s Average min max 00:08:46.796 PCIE (0000:00:13.0) NSID 1 from core 2: 2389.25 9.33 6695.90 759.47 33099.99 00:08:46.796 PCIE (0000:00:10.0) NSID 1 from core 2: 2389.25 9.33 6695.46 745.45 40055.75 00:08:46.796 PCIE (0000:00:11.0) NSID 1 from core 2: 2389.25 9.33 6696.66 729.29 37762.50 00:08:46.796 PCIE (0000:00:12.0) NSID 1 from core 2: 2389.25 9.33 6696.86 765.30 40511.30 00:08:46.796 PCIE (0000:00:12.0) NSID 2 from core 2: 2389.25 9.33 6696.06 764.67 31291.64 00:08:46.796 PCIE (0000:00:12.0) NSID 3 from core 2: 2389.25 9.33 6696.61 762.48 32716.20 00:08:46.796 ======================================================== 00:08:46.796 Total : 14335.49 56.00 6696.26 729.29 40511.30 00:08:46.796 00:08:46.796 18:18:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63746 00:08:46.796 18:18:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63747 00:08:46.796 00:08:46.796 real 0m10.725s 00:08:46.796 user 0m18.343s 00:08:46.796 sys 0m0.675s 00:08:46.796 18:18:04 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.796 18:18:04 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:46.796 ************************************ 00:08:46.796 END TEST nvme_multi_secondary 00:08:46.796 ************************************ 00:08:46.796 18:18:04 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:46.796 18:18:04 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:46.796 18:18:04 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/62703 ]] 00:08:46.796 18:18:04 nvme -- common/autotest_common.sh@1094 -- # kill 62703 00:08:46.796 18:18:04 nvme -- common/autotest_common.sh@1095 -- # wait 62703 00:08:46.796 [2024-11-20 18:18:04.938981] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63624) is not found. Dropping the request. 00:08:46.796 [2024-11-20 18:18:04.939088] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63624) is not found. Dropping the request. 00:08:46.796 [2024-11-20 18:18:04.939153] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63624) is not found. Dropping the request. 00:08:46.796 [2024-11-20 18:18:04.939182] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63624) is not found. Dropping the request. 00:08:46.796 [2024-11-20 18:18:04.943555] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63624) is not found. Dropping the request. 00:08:46.796 [2024-11-20 18:18:04.943654] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63624) is not found. Dropping the request. 00:08:46.796 [2024-11-20 18:18:04.943684] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63624) is not found. Dropping the request. 00:08:46.796 [2024-11-20 18:18:04.943712] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63624) is not found. Dropping the request. 00:08:46.796 [2024-11-20 18:18:04.947934] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63624) is not found. Dropping the request. 
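The repeated ERROR lines around this point are expected teardown noise rather than a test failure: kill_stub is terminating the stub process (pid 62703) that held the controllers open, and pending admin requests whose owning process (pid 63624) has already exited are found ownerless and dropped. A minimal sketch of the same teardown, following the xtrace visible here (helper details simplified):

# If the stub process is still alive, kill it, reap it, and remove its socket file.
if [[ -e /proc/62703 ]]; then
    kill 62703
    wait 62703 || true   # the reaped status reflects the kill; this sketch ignores it
fi
rm -f /var/run/spdk_stub0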
00:08:46.797 [2024-11-20 18:18:04.948036] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63624) is not found. Dropping the request. 00:08:46.797 [2024-11-20 18:18:04.948065] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63624) is not found. Dropping the request. 00:08:46.797 [2024-11-20 18:18:04.948114] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63624) is not found. Dropping the request. 00:08:46.797 [2024-11-20 18:18:04.952121] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63624) is not found. Dropping the request. 00:08:46.797 [2024-11-20 18:18:04.952206] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63624) is not found. Dropping the request. 00:08:46.797 [2024-11-20 18:18:04.952233] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63624) is not found. Dropping the request. 00:08:46.797 [2024-11-20 18:18:04.952261] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63624) is not found. Dropping the request. 00:08:46.797 18:18:05 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:08:46.797 18:18:05 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:08:46.797 18:18:05 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:46.797 18:18:05 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:46.797 18:18:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.797 18:18:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:46.797 ************************************ 00:08:46.797 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:46.797 ************************************ 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:46.797 * Looking for test storage... 
00:08:46.797 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:46.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.797 --rc genhtml_branch_coverage=1 00:08:46.797 --rc genhtml_function_coverage=1 00:08:46.797 --rc genhtml_legend=1 00:08:46.797 --rc geninfo_all_blocks=1 00:08:46.797 --rc geninfo_unexecuted_blocks=1 00:08:46.797 00:08:46.797 ' 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:46.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.797 --rc genhtml_branch_coverage=1 00:08:46.797 --rc genhtml_function_coverage=1 00:08:46.797 --rc genhtml_legend=1 00:08:46.797 --rc geninfo_all_blocks=1 00:08:46.797 --rc geninfo_unexecuted_blocks=1 00:08:46.797 00:08:46.797 ' 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:46.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.797 --rc genhtml_branch_coverage=1 00:08:46.797 --rc genhtml_function_coverage=1 00:08:46.797 --rc genhtml_legend=1 00:08:46.797 --rc geninfo_all_blocks=1 00:08:46.797 --rc geninfo_unexecuted_blocks=1 00:08:46.797 00:08:46.797 ' 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:46.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.797 --rc genhtml_branch_coverage=1 00:08:46.797 --rc genhtml_function_coverage=1 00:08:46.797 --rc genhtml_legend=1 00:08:46.797 --rc geninfo_all_blocks=1 00:08:46.797 --rc geninfo_unexecuted_blocks=1 00:08:46.797 00:08:46.797 ' 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:46.797 
18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=63912 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:46.797 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 63912 00:08:46.798 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 63912 ']' 00:08:46.798 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.798 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:46.798 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
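Taken together, the parameters traced above define the scenario for this test: admin opcode 10 (Get Features, as the later GET FEATURES NUMBER OF QUEUES print confirms) on nvme0 is armed to fail exactly once with SCT 0 / SC 1 while being held for up to 15 s, and the controller reset issued underneath it must complete within test_timeout (5 s). The injection RPC, exactly as issued later in this trace once the target is up:

# Arm a one-shot injection on admin opcode 10: hold the command (--do_not_submit)
# for up to 15 s, then complete it with sct=0, sc=1.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_add_error_injection -n nvme0 \
    --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit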
00:08:46.798 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:46.798 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:46.798 18:18:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:46.798 [2024-11-20 18:18:05.380070] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:08:46.798 [2024-11-20 18:18:05.380201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63912 ] 00:08:47.059 [2024-11-20 18:18:05.549969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:47.059 [2024-11-20 18:18:05.649208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:47.059 [2024-11-20 18:18:05.649837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:47.059 [2024-11-20 18:18:05.650116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.059 [2024-11-20 18:18:05.650157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:47.631 18:18:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:47.631 18:18:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:08:47.631 18:18:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:47.631 18:18:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.631 18:18:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:47.892 nvme0n1 00:08:47.892 18:18:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.892 18:18:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:47.892 18:18:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_X4MtX.txt 00:08:47.892 18:18:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:47.892 18:18:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.892 18:18:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:47.892 true 00:08:47.892 18:18:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.892 18:18:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:47.892 18:18:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732126686 00:08:47.892 18:18:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=63935 00:08:47.892 18:18:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:47.892 18:18:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:47.892 18:18:06 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:49.808 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:49.808 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.808 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:49.808 [2024-11-20 18:18:08.329556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:49.809 [2024-11-20 18:18:08.330323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:49.809 [2024-11-20 18:18:08.330446] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:49.809 [2024-11-20 18:18:08.330503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:49.809 [2024-11-20 18:18:08.333943] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:49.809 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.809 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 63935 00:08:49.809 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 63935 00:08:49.809 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 63935 00:08:49.809 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:49.809 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:49.809 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:49.809 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.809 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:49.809 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.809 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:49.809 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_X4MtX.txt 00:08:49.809 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:49.809 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:49.809 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:49.809 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:49.809 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:49.809 18:18:08 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:49.809 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:49.809 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:49.809 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:49.809 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:49.809 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:49.809 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:49.809 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:49.809 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:49.809 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:49.809 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:49.809 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:49.809 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:49.809 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:49.809 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_X4MtX.txt 00:08:49.809 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 63912 00:08:49.809 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 63912 ']' 00:08:49.809 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 63912 00:08:49.809 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:08:49.809 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:49.809 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63912 00:08:50.069 killing process with pid 63912 00:08:50.069 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:50.069 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:50.069 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63912' 00:08:50.069 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 63912 00:08:50.069 18:18:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 63912 00:08:51.445 18:18:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:51.445 18:18:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:51.445 00:08:51.445 real 
0m4.728s 00:08:51.445 user 0m16.754s 00:08:51.445 sys 0m0.535s 00:08:51.445 18:18:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.445 18:18:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:51.445 ************************************ 00:08:51.445 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:51.445 ************************************ 00:08:51.445 18:18:09 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:51.445 18:18:09 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:51.445 18:18:09 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:51.445 18:18:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.445 18:18:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:51.445 ************************************ 00:08:51.445 START TEST nvme_fio 00:08:51.445 ************************************ 00:08:51.445 18:18:09 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:08:51.445 18:18:09 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:51.445 18:18:09 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:51.445 18:18:09 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:51.445 18:18:09 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:51.445 18:18:09 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:08:51.445 18:18:09 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:51.445 18:18:09 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:51.445 18:18:09 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:51.445 18:18:09 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:51.445 18:18:09 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:51.445 18:18:09 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:51.445 18:18:09 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:51.445 18:18:09 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:51.445 18:18:09 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:51.445 18:18:09 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:51.704 18:18:10 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:51.704 18:18:10 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:51.964 18:18:10 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:51.964 18:18:10 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:51.964 18:18:10 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:51.964 18:18:10 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:51.964 18:18:10 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:08:51.964 18:18:10 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:51.964 18:18:10 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:51.964 18:18:10 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:51.964 18:18:10 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:51.964 18:18:10 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:51.964 18:18:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:51.964 18:18:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:51.964 18:18:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:51.964 18:18:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:51.964 18:18:10 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:51.964 18:18:10 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:51.964 18:18:10 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:51.964 18:18:10 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:51.964 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:51.964 fio-3.35 00:08:51.964 Starting 1 thread 00:08:57.232 00:08:57.232 test: (groupid=0, jobs=1): err= 0: pid=64072: Wed Nov 20 18:18:15 2024 00:08:57.232 read: IOPS=24.1k, BW=94.1MiB/s (98.7MB/s)(188MiB/2001msec) 00:08:57.232 slat (usec): min=3, max=135, avg= 4.94, stdev= 2.25 00:08:57.232 clat (usec): min=368, max=11504, avg=2651.90, stdev=816.72 00:08:57.232 lat (usec): min=373, max=11558, avg=2656.84, stdev=818.00 00:08:57.232 clat percentiles (usec): 00:08:57.232 | 1.00th=[ 1549], 5.00th=[ 2057], 10.00th=[ 2245], 20.00th=[ 2343], 00:08:57.232 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2474], 00:08:57.232 | 70.00th=[ 2507], 80.00th=[ 2704], 90.00th=[ 3261], 95.00th=[ 4490], 00:08:57.232 | 99.00th=[ 6325], 99.50th=[ 6718], 99.90th=[ 8291], 99.95th=[ 8455], 00:08:57.232 | 99.99th=[11076] 00:08:57.232 bw ( KiB/s): min=93104, max=99288, per=98.75%, avg=95189.33, stdev=3549.73, samples=3 00:08:57.232 iops : min=23276, max=24822, avg=23797.33, stdev=887.43, samples=3 00:08:57.232 write: IOPS=23.9k, BW=93.5MiB/s (98.0MB/s)(187MiB/2001msec); 0 zone resets 00:08:57.232 slat (usec): min=3, max=215, avg= 5.15, stdev= 2.62 00:08:57.232 clat (usec): min=340, max=11288, avg=2655.64, stdev=818.42 00:08:57.232 lat (usec): min=344, max=11305, avg=2660.79, stdev=819.71 00:08:57.232 clat percentiles (usec): 00:08:57.232 | 1.00th=[ 1565], 5.00th=[ 2057], 10.00th=[ 2245], 20.00th=[ 2343], 00:08:57.232 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2474], 00:08:57.232 | 70.00th=[ 2507], 80.00th=[ 2704], 90.00th=[ 3294], 95.00th=[ 4424], 00:08:57.232 | 99.00th=[ 6325], 99.50th=[ 6718], 99.90th=[ 8291], 99.95th=[ 8848], 00:08:57.232 | 99.99th=[10814] 00:08:57.232 bw ( KiB/s): min=92160, max=100632, per=99.43%, avg=95200.00, stdev=4715.39, samples=3 00:08:57.232 iops : min=23040, max=25158, avg=23800.00, stdev=1178.85, samples=3 00:08:57.232 lat (usec) : 500=0.02%, 
750=0.03%, 1000=0.09% 00:08:57.232 lat (msec) : 2=4.04%, 4=89.48%, 10=6.32%, 20=0.03% 00:08:57.232 cpu : usr=98.90%, sys=0.15%, ctx=30, majf=0, minf=607 00:08:57.232 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:57.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:57.232 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:57.232 issued rwts: total=48220,47897,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:57.233 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:57.233 00:08:57.233 Run status group 0 (all jobs): 00:08:57.233 READ: bw=94.1MiB/s (98.7MB/s), 94.1MiB/s-94.1MiB/s (98.7MB/s-98.7MB/s), io=188MiB (198MB), run=2001-2001msec 00:08:57.233 WRITE: bw=93.5MiB/s (98.0MB/s), 93.5MiB/s-93.5MiB/s (98.0MB/s-98.0MB/s), io=187MiB (196MB), run=2001-2001msec 00:08:57.233 ----------------------------------------------------- 00:08:57.233 Suppressions used: 00:08:57.233 count bytes template 00:08:57.233 1 32 /usr/src/fio/parse.c 00:08:57.233 1 8 libtcmalloc_minimal.so 00:08:57.233 ----------------------------------------------------- 00:08:57.233 00:08:57.233 18:18:15 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:57.233 18:18:15 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:57.233 18:18:15 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:57.233 18:18:15 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:57.233 18:18:15 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:57.233 18:18:15 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:57.491 18:18:15 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:57.491 18:18:15 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:57.491 18:18:15 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:57.491 18:18:15 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:57.491 18:18:15 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:57.491 18:18:15 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:57.491 18:18:15 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:57.491 18:18:15 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:57.491 18:18:15 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:57.491 18:18:15 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:57.491 18:18:15 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:57.491 18:18:15 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:57.491 18:18:15 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:57.491 18:18:15 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:57.491 18:18:15 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:08:57.491 18:18:15 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:57.491 18:18:15 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:57.491 18:18:15 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:57.491 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:57.491 fio-3.35 00:08:57.491 Starting 1 thread 00:09:04.055 00:09:04.055 test: (groupid=0, jobs=1): err= 0: pid=64128: Wed Nov 20 18:18:22 2024 00:09:04.055 read: IOPS=24.3k, BW=95.0MiB/s (99.6MB/s)(190MiB/2001msec) 00:09:04.055 slat (nsec): min=3394, max=91001, avg=5061.85, stdev=2309.29 00:09:04.055 clat (usec): min=219, max=8561, avg=2630.66, stdev=850.85 00:09:04.055 lat (usec): min=224, max=8573, avg=2635.73, stdev=852.39 00:09:04.055 clat percentiles (usec): 00:09:04.055 | 1.00th=[ 1762], 5.00th=[ 2114], 10.00th=[ 2245], 20.00th=[ 2311], 00:09:04.055 | 30.00th=[ 2343], 40.00th=[ 2343], 50.00th=[ 2376], 60.00th=[ 2409], 00:09:04.055 | 70.00th=[ 2442], 80.00th=[ 2540], 90.00th=[ 3261], 95.00th=[ 4817], 00:09:04.055 | 99.00th=[ 6194], 99.50th=[ 6652], 99.90th=[ 7963], 99.95th=[ 8291], 00:09:04.055 | 99.99th=[ 8455] 00:09:04.055 bw ( KiB/s): min=91488, max=99128, per=98.94%, avg=96229.33, stdev=4139.92, samples=3 00:09:04.055 iops : min=22874, max=24780, avg=24057.33, stdev=1033.14, samples=3 00:09:04.055 write: IOPS=24.2k, BW=94.4MiB/s (98.9MB/s)(189MiB/2001msec); 0 zone resets 00:09:04.055 slat (nsec): min=3489, max=57036, avg=5283.09, stdev=2223.11 00:09:04.055 clat (usec): min=267, max=8600, avg=2628.87, stdev=842.14 00:09:04.055 lat (usec): min=272, max=8612, avg=2634.15, stdev=843.61 00:09:04.055 clat percentiles (usec): 00:09:04.055 | 1.00th=[ 1762], 5.00th=[ 2114], 10.00th=[ 2245], 20.00th=[ 2311], 00:09:04.055 | 30.00th=[ 2343], 40.00th=[ 2343], 50.00th=[ 2376], 60.00th=[ 2409], 00:09:04.055 | 70.00th=[ 2442], 80.00th=[ 2540], 90.00th=[ 3228], 95.00th=[ 4817], 00:09:04.055 | 99.00th=[ 6194], 99.50th=[ 6521], 99.90th=[ 7898], 99.95th=[ 8225], 00:09:04.055 | 99.99th=[ 8586] 00:09:04.055 bw ( KiB/s): min=90976, max=100336, per=99.70%, avg=96336.00, stdev=4825.93, samples=3 00:09:04.055 iops : min=22744, max=25084, avg=24084.00, stdev=1206.48, samples=3 00:09:04.055 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:09:04.055 lat (msec) : 2=2.67%, 4=89.71%, 10=7.58% 00:09:04.055 cpu : usr=99.35%, sys=0.00%, ctx=4, majf=0, minf=608 00:09:04.055 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:04.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:04.055 issued rwts: total=48652,48336,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:04.055 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:04.055 00:09:04.055 Run status group 0 (all jobs): 00:09:04.055 READ: bw=95.0MiB/s (99.6MB/s), 95.0MiB/s-95.0MiB/s (99.6MB/s-99.6MB/s), io=190MiB (199MB), run=2001-2001msec 00:09:04.055 WRITE: bw=94.4MiB/s (98.9MB/s), 94.4MiB/s-94.4MiB/s (98.9MB/s-98.9MB/s), io=189MiB (198MB), run=2001-2001msec 00:09:04.316 ----------------------------------------------------- 00:09:04.316 Suppressions used: 00:09:04.316 count bytes template 00:09:04.316 1 32 
/usr/src/fio/parse.c 00:09:04.316 1 8 libtcmalloc_minimal.so 00:09:04.316 ----------------------------------------------------- 00:09:04.316 00:09:04.316 18:18:22 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:04.316 18:18:22 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:04.316 18:18:22 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:04.316 18:18:22 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:04.577 18:18:22 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:04.577 18:18:22 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:04.577 18:18:23 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:04.577 18:18:23 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:04.577 18:18:23 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:04.577 18:18:23 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:04.577 18:18:23 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:04.577 18:18:23 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:04.577 18:18:23 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:04.577 18:18:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:04.577 18:18:23 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:04.577 18:18:23 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:04.577 18:18:23 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:04.577 18:18:23 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:04.577 18:18:23 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:04.837 18:18:23 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:04.838 18:18:23 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:04.838 18:18:23 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:04.838 18:18:23 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:04.838 18:18:23 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:04.838 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:04.838 fio-3.35 00:09:04.838 Starting 1 thread 00:09:11.412 00:09:11.412 test: (groupid=0, jobs=1): err= 0: pid=64189: Wed Nov 20 18:18:29 2024 00:09:11.412 read: IOPS=21.5k, BW=84.0MiB/s (88.0MB/s)(168MiB/2001msec) 00:09:11.412 slat (nsec): min=3353, max=73280, avg=5126.88, stdev=2427.20 00:09:11.412 clat (usec): min=762, max=10939, avg=2970.00, stdev=1141.34 00:09:11.412 lat (usec): min=773, max=11012, avg=2975.13, 
stdev=1142.54 00:09:11.412 clat percentiles (usec): 00:09:11.412 | 1.00th=[ 1860], 5.00th=[ 2089], 10.00th=[ 2212], 20.00th=[ 2311], 00:09:11.412 | 30.00th=[ 2343], 40.00th=[ 2409], 50.00th=[ 2474], 60.00th=[ 2638], 00:09:11.412 | 70.00th=[ 2868], 80.00th=[ 3359], 90.00th=[ 4752], 95.00th=[ 5669], 00:09:11.412 | 99.00th=[ 6980], 99.50th=[ 7439], 99.90th=[ 8225], 99.95th=[ 8455], 00:09:11.412 | 99.99th=[10814] 00:09:11.412 bw ( KiB/s): min=78336, max=92840, per=100.00%, avg=87125.33, stdev=7725.39, samples=3 00:09:11.412 iops : min=19584, max=23210, avg=21781.33, stdev=1931.35, samples=3 00:09:11.412 write: IOPS=21.3k, BW=83.3MiB/s (87.3MB/s)(167MiB/2001msec); 0 zone resets 00:09:11.412 slat (nsec): min=3480, max=62331, avg=5278.45, stdev=2487.96 00:09:11.412 clat (usec): min=806, max=10858, avg=2986.03, stdev=1145.64 00:09:11.412 lat (usec): min=811, max=10878, avg=2991.31, stdev=1146.81 00:09:11.412 clat percentiles (usec): 00:09:11.412 | 1.00th=[ 1876], 5.00th=[ 2089], 10.00th=[ 2245], 20.00th=[ 2311], 00:09:11.412 | 30.00th=[ 2343], 40.00th=[ 2409], 50.00th=[ 2507], 60.00th=[ 2671], 00:09:11.412 | 70.00th=[ 2900], 80.00th=[ 3392], 90.00th=[ 4752], 95.00th=[ 5669], 00:09:11.412 | 99.00th=[ 7046], 99.50th=[ 7570], 99.90th=[ 8291], 99.95th=[ 8586], 00:09:11.412 | 99.99th=[10683] 00:09:11.412 bw ( KiB/s): min=78304, max=92600, per=100.00%, avg=87269.33, stdev=7810.38, samples=3 00:09:11.412 iops : min=19576, max=23150, avg=21817.33, stdev=1952.59, samples=3 00:09:11.412 lat (usec) : 1000=0.05% 00:09:11.412 lat (msec) : 2=1.78%, 4=83.36%, 10=14.79%, 20=0.02% 00:09:11.412 cpu : usr=98.90%, sys=0.20%, ctx=19, majf=0, minf=608 00:09:11.412 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:11.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:11.412 issued rwts: total=43006,42673,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:11.412 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:11.412 00:09:11.412 Run status group 0 (all jobs): 00:09:11.412 READ: bw=84.0MiB/s (88.0MB/s), 84.0MiB/s-84.0MiB/s (88.0MB/s-88.0MB/s), io=168MiB (176MB), run=2001-2001msec 00:09:11.412 WRITE: bw=83.3MiB/s (87.3MB/s), 83.3MiB/s-83.3MiB/s (87.3MB/s-87.3MB/s), io=167MiB (175MB), run=2001-2001msec 00:09:11.412 ----------------------------------------------------- 00:09:11.412 Suppressions used: 00:09:11.412 count bytes template 00:09:11.412 1 32 /usr/src/fio/parse.c 00:09:11.412 1 8 libtcmalloc_minimal.so 00:09:11.412 ----------------------------------------------------- 00:09:11.412 00:09:11.412 18:18:29 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:11.412 18:18:29 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:11.412 18:18:29 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:11.412 18:18:29 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:11.673 18:18:30 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:11.673 18:18:30 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:11.935 18:18:30 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:11.935 18:18:30 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 
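The fio_plugin expansion traced next is the same routine that already ran for 0000:00:10.0, 0000:00:11.0 and 0000:00:12.0: ldd the SPDK ioengine, look for a sanitizer runtime (libasan first, then libclang_rt.asan), and put whichever library resolves in front of the plugin in LD_PRELOAD so fio can dlopen the ASan-instrumented spdk_nvme cleanly. A condensed sketch of that logic, with the paths and variable names taken from the trace (not a verbatim copy of common/autotest_common.sh):

# Condensed sketch of the fio_plugin preload logic visible in the xtrace.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
sanitizers=('libasan' 'libclang_rt.asan')
asan_lib=
for sanitizer in "${sanitizers[@]}"; do
    # The third ldd column is the resolved library path, e.g. /usr/lib64/libasan.so.8
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n $asan_lib ]] && break
done
# Preloading the sanitizer runtime ahead of the plugin keeps ASan's interceptors
# resolvable when fio dlopens the ioengine.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096
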
00:09:11.935 18:18:30 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:11.935 18:18:30 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:11.935 18:18:30 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:11.935 18:18:30 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:11.935 18:18:30 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:11.935 18:18:30 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:11.935 18:18:30 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:11.935 18:18:30 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:11.935 18:18:30 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:11.935 18:18:30 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:11.935 18:18:30 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:11.935 18:18:30 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:11.935 18:18:30 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:11.935 18:18:30 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:11.935 18:18:30 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:11.935 18:18:30 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:11.935 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:11.935 fio-3.35 00:09:11.935 Starting 1 thread 00:09:21.926 00:09:21.926 test: (groupid=0, jobs=1): err= 0: pid=64255: Wed Nov 20 18:18:40 2024 00:09:21.926 read: IOPS=22.8k, BW=89.1MiB/s (93.4MB/s)(178MiB/2001msec) 00:09:21.926 slat (usec): min=3, max=133, avg= 4.96, stdev= 2.20 00:09:21.926 clat (usec): min=239, max=10350, avg=2800.42, stdev=873.54 00:09:21.926 lat (usec): min=244, max=10422, avg=2805.38, stdev=874.63 00:09:21.926 clat percentiles (usec): 00:09:21.926 | 1.00th=[ 1827], 5.00th=[ 2114], 10.00th=[ 2212], 20.00th=[ 2311], 00:09:21.926 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2474], 60.00th=[ 2606], 00:09:21.926 | 70.00th=[ 2769], 80.00th=[ 2999], 90.00th=[ 3949], 95.00th=[ 4948], 00:09:21.926 | 99.00th=[ 6325], 99.50th=[ 6783], 99.90th=[ 7373], 99.95th=[ 7701], 00:09:21.926 | 99.99th=[10028] 00:09:21.927 bw ( KiB/s): min=86216, max=98320, per=100.00%, avg=92816.00, stdev=6125.98, samples=3 00:09:21.927 iops : min=21554, max=24580, avg=23204.00, stdev=1531.49, samples=3 00:09:21.927 write: IOPS=22.7k, BW=88.5MiB/s (92.8MB/s)(177MiB/2001msec); 0 zone resets 00:09:21.927 slat (usec): min=3, max=141, avg= 5.10, stdev= 2.18 00:09:21.927 clat (usec): min=207, max=10170, avg=2809.87, stdev=872.98 00:09:21.927 lat (usec): min=211, max=10190, avg=2814.97, stdev=873.97 00:09:21.927 clat percentiles (usec): 00:09:21.927 | 1.00th=[ 1827], 5.00th=[ 2114], 10.00th=[ 2245], 20.00th=[ 2343], 00:09:21.927 | 30.00th=[ 2376], 40.00th=[ 2442], 
50.00th=[ 2507], 60.00th=[ 2606], 00:09:21.927 | 70.00th=[ 2769], 80.00th=[ 2999], 90.00th=[ 3949], 95.00th=[ 4948], 00:09:21.927 | 99.00th=[ 6325], 99.50th=[ 6849], 99.90th=[ 7373], 99.95th=[ 7701], 00:09:21.927 | 99.99th=[ 9634] 00:09:21.927 bw ( KiB/s): min=85880, max=97312, per=100.00%, avg=92896.00, stdev=6143.51, samples=3 00:09:21.927 iops : min=21470, max=24328, avg=23224.00, stdev=1535.88, samples=3 00:09:21.927 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.03% 00:09:21.927 lat (msec) : 2=1.95%, 4=88.35%, 10=9.64%, 20=0.01% 00:09:21.927 cpu : usr=99.00%, sys=0.10%, ctx=16, majf=0, minf=605 00:09:21.927 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:21.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.927 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:21.927 issued rwts: total=45618,45348,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.927 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:21.927 00:09:21.927 Run status group 0 (all jobs): 00:09:21.927 READ: bw=89.1MiB/s (93.4MB/s), 89.1MiB/s-89.1MiB/s (93.4MB/s-93.4MB/s), io=178MiB (187MB), run=2001-2001msec 00:09:21.927 WRITE: bw=88.5MiB/s (92.8MB/s), 88.5MiB/s-88.5MiB/s (92.8MB/s-92.8MB/s), io=177MiB (186MB), run=2001-2001msec 00:09:21.927 ----------------------------------------------------- 00:09:21.927 Suppressions used: 00:09:21.927 count bytes template 00:09:21.927 1 32 /usr/src/fio/parse.c 00:09:21.927 1 8 libtcmalloc_minimal.so 00:09:21.927 ----------------------------------------------------- 00:09:21.927 00:09:21.927 18:18:40 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:21.927 18:18:40 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:21.927 00:09:21.927 real 0m30.641s 00:09:21.927 user 0m16.509s 00:09:21.927 sys 0m26.315s 00:09:21.927 ************************************ 00:09:21.927 END TEST nvme_fio 00:09:21.927 ************************************ 00:09:21.927 18:18:40 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.927 18:18:40 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:22.188 00:09:22.188 real 1m40.385s 00:09:22.188 user 3m37.396s 00:09:22.188 sys 0m37.174s 00:09:22.188 ************************************ 00:09:22.188 END TEST nvme 00:09:22.188 ************************************ 00:09:22.188 18:18:40 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.188 18:18:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:22.188 18:18:40 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:22.188 18:18:40 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:22.188 18:18:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:22.188 18:18:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.188 18:18:40 -- common/autotest_common.sh@10 -- # set +x 00:09:22.188 ************************************ 00:09:22.188 START TEST nvme_scc 00:09:22.188 ************************************ 00:09:22.188 18:18:40 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:22.188 * Looking for test storage... 
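Before nvme_scc starts probing controllers, the xtrace below walks the lcov version check in scripts/common.sh: lt 1.15 2 splits both version strings on '.', '-' and ':', normalizes each field through a decimal() regex check, and compares the components numerically one position at a time, with missing components treated as 0. A minimal sketch of that comparison, strict less-than only (the real cmp_versions also handles the other operators via its case "$op" branch):

# Minimal sketch of the field-by-field version walk traced below.
# Returns success when $1 sorts strictly before $2, e.g. version_lt 1.15 2.
version_lt() {
    local -a ver1 ver2
    local v len
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        # Missing components default to 0, so 1.15 vs 2 decides on 1 < 2 first.
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal versions are not strictly less
}
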
00:09:22.188 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:22.188 18:18:40 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:22.188 18:18:40 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:22.188 18:18:40 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:22.188 18:18:40 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:22.188 18:18:40 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:22.188 18:18:40 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:22.188 18:18:40 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:22.188 18:18:40 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:22.188 18:18:40 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:22.188 18:18:40 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:22.188 18:18:40 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:22.188 18:18:40 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:22.188 18:18:40 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:22.188 18:18:40 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:22.188 18:18:40 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:22.188 18:18:40 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:22.188 18:18:40 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:22.188 18:18:40 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:22.188 18:18:40 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:22.188 18:18:40 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:22.188 18:18:40 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:22.188 18:18:40 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:22.188 18:18:40 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:22.188 18:18:40 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:22.188 18:18:40 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:22.188 18:18:40 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:22.188 18:18:40 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:22.188 18:18:40 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:22.188 18:18:40 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:22.188 18:18:40 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:22.188 18:18:40 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:22.188 18:18:40 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:22.188 18:18:40 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:22.188 18:18:40 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:22.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.188 --rc genhtml_branch_coverage=1 00:09:22.188 --rc genhtml_function_coverage=1 00:09:22.188 --rc genhtml_legend=1 00:09:22.188 --rc geninfo_all_blocks=1 00:09:22.188 --rc geninfo_unexecuted_blocks=1 00:09:22.188 00:09:22.188 ' 00:09:22.188 18:18:40 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:22.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.188 --rc genhtml_branch_coverage=1 00:09:22.188 --rc genhtml_function_coverage=1 00:09:22.188 --rc genhtml_legend=1 00:09:22.188 --rc geninfo_all_blocks=1 00:09:22.188 --rc geninfo_unexecuted_blocks=1 00:09:22.188 00:09:22.188 ' 00:09:22.188 18:18:40 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:22.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.188 --rc genhtml_branch_coverage=1 00:09:22.188 --rc genhtml_function_coverage=1 00:09:22.188 --rc genhtml_legend=1 00:09:22.188 --rc geninfo_all_blocks=1 00:09:22.188 --rc geninfo_unexecuted_blocks=1 00:09:22.188 00:09:22.188 ' 00:09:22.188 18:18:40 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:22.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.188 --rc genhtml_branch_coverage=1 00:09:22.188 --rc genhtml_function_coverage=1 00:09:22.188 --rc genhtml_legend=1 00:09:22.188 --rc geninfo_all_blocks=1 00:09:22.189 --rc geninfo_unexecuted_blocks=1 00:09:22.189 00:09:22.189 ' 00:09:22.189 18:18:40 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:22.189 18:18:40 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:22.189 18:18:40 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:22.189 18:18:40 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:22.189 18:18:40 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:22.189 18:18:40 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:22.189 18:18:40 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.189 18:18:40 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.189 18:18:40 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.189 18:18:40 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.189 18:18:40 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.189 18:18:40 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.189 18:18:40 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:22.189 18:18:40 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
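Once the environment is set up, scan_nvme_ctrls below walks every /sys/class/nvme/nvme* controller and snapshots its id-ctrl output into a bash associative array: nvme_get shifts off the array name, declares it with local -gA, then reads the 'reg : val' lines from nvme-cli one by one, keeping each register that carries a value. The long run of eval assignments that follows is that loop firing once per identify field. A simplified reading of the pattern, with the array name fixed to nvme0 for clarity (the real helper evals each assignment into a caller-chosen array):

# Simplified sketch of the nvme_get parsing loop traced below.
declare -A nvme0=()
while IFS=: read -r reg val; do
    reg=${reg// /}     # register names come padded, e.g. 'vid       '
    val=${val# }       # keep the value, minus the single space after ':'
    # Only keep fields that actually carry a value:
    # nvme0[vid]=0x1b36, nvme0[sn]='12341 ', nvme0[mdts]=7, ...
    [[ -n $reg && -n $val ]] && nvme0[$reg]=$val
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
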
00:09:22.189 18:18:40 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:22.189 18:18:40 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:22.189 18:18:40 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:22.189 18:18:40 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:22.189 18:18:40 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:22.189 18:18:40 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:22.189 18:18:40 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:22.189 18:18:40 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:22.189 18:18:40 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:22.189 18:18:40 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:22.189 18:18:40 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:22.189 18:18:40 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:22.189 18:18:40 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:22.189 18:18:40 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:22.450 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:22.711 Waiting for block devices as requested 00:09:22.711 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:22.711 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:22.973 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:22.973 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:28.275 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:28.275 18:18:46 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:28.275 18:18:46 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:28.275 18:18:46 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:28.275 18:18:46 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:28.275 18:18:46 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.275 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.276 18:18:46 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.276 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:28.277 18:18:46 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.277 18:18:46 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.277 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.278 18:18:46 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:28.278 18:18:46 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:28.278 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:28.279 
18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
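The run of near-identical trace above and below is the suite's nvme_get helper walking `nvme id-ns` output for the generic character node /dev/ng0n1 one field at a time; it continues through the remaining namespace fields after this note. Reconstructed purely from the functions.sh@16-23 calls visible in this log, the loop is roughly the following sketch — the whitespace-trimming details are assumptions, not verbatim SPDK source:

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                  # declares a global assoc. array, e.g. ng0n1=()  (@20)
        while IFS=: read -r reg val; do      # split "nsze : 0x140000" on the first ':'       (@21)
            [[ -n $val ]] || continue        # skip banner lines carrying no value            (@22)
            reg=${reg//[[:space:]]/}         # "nsze " -> "nsze"
            val=${val# }                     # drop the space after the ':'
            eval "${ref}[$reg]=\"$val\""     # -> ng0n1[nsze]="0x140000"                      (@23)
        done < <(nvme "$@")                  # here: /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 (@16)
    }

Because read -r reg val leaves everything after the first ':' in val, multi-colon values survive whole, which is why the controller pass earlier in the trace could record nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' verbatim.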
00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:28.279 18:18:46 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:28.279 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.280 18:18:46 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:28.280 18:18:46 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.280 18:18:46 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.280 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:28.281 18:18:46 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:28.281 18:18:46 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:28.282 18:18:46 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:28.282 18:18:46 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:28.282 18:18:46 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:28.282 18:18:46 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:28.282 18:18:46 
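Here the scan of controller nvme0 finishes: @58 files nvme0n1 into the namespace map alongside ng0n1, @60-63 record the controller, its namespace array, its PCI address (bdfs[nvme0]=0000:00:11.0) and its ordering slot, and @47-51 show the outer loop advancing to /sys/class/nvme/nvme1, which pci_can_use admits because no PCI_ALLOWED/PCI_BLOCKED filter is set ([[ -z '' ]] at scripts/common.sh@25). Read from the @47-63 trace alone, that outer loop is approximately the sketch below; identifier names follow the trace, but the body — in particular the pci_of helper — is an assumption, not the verbatim nvme/functions.sh:

    # sketch; in the real script this loop runs inside a function, so local -n is valid
    shopt -s extglob                                   # required by the @(...) glob at @54
    for ctrl in /sys/class/nvme/nvme*; do              # @47
        [[ -e $ctrl ]] || continue                     # @48
        pci=$(pci_of "$ctrl")                          # @49 — pci_of is hypothetical; the BDF comes from sysfs
        pci_can_use "$pci" || continue                 # @50: PCI_ALLOWED/PCI_BLOCKED gate in scripts/common.sh
        ctrl_dev=${ctrl##*/}                           # @51: e.g. nvme1
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"  # @52
        local -n _ctrl_ns=${ctrl_dev}_ns               # @53: nameref to e.g. nvme1_ns
        for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do  # @54: matches ng0n1 and nvme0n1
            [[ -e $ns ]] || continue                   # @55
            ns_dev=${ns##*/}                           # @56
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"    # @57
            _ctrl_ns[${ns##*n}]=$ns_dev                # @58: keyed by namespace index ("...n1" -> 1)
        done
        ctrls["$ctrl_dev"]=$ctrl_dev                   # @60
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns              # @61
        bdfs["$ctrl_dev"]=$pci                         # @62
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev     # @63
    done

The trace below repeats the same id-ctrl/id-ns walk for nvme1 at 0000:00:10.0 (the second QEMU controller, sn 12340). Note that both nvme0 namespaces report flbas=0x4, i.e. LBA format 4 with ms:0 and lbads:12 — 4096-byte blocks without metadata — so nsze=0x140000 (1310720 blocks) works out to exactly 5 GiB.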
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.282 
18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:28.282 
18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.282 18:18:46 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:28.282 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.283 18:18:46 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:28.283 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.284 18:18:46 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:28.284 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:28.285 18:18:46 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:28.285 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:28.286 18:18:46 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.286 
18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.286 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:28.287 
18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.287 18:18:46 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.287 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.288 18:18:46 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:28.288 18:18:46 
00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@60-63 -- # ctrls["$ctrl_dev"]=nvme1 nvmes["$ctrl_dev"]=nvme1_ns bdfs["$ctrl_dev"]=0000:00:10.0 ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:09:28.288 18:18:46 nvme_scc -- nvme/functions.sh@47-52 -- # next controller /sys/class/nvme/nvme2: pci=0000:00:12.0, pci_can_use 0000:00:12.0 -> 0, ctrl_dev=nvme2, nvme_get nvme2 id-ctrl /dev/nvme2
00:09:28.288-289 18:18:46 nvme_scc -- nvme/functions.sh@21-23 -- # nvme2 id-ctrl: vid=0x1b36 ssvid=0x1af4 sn='12342 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0
00:09:28.289-291 18:18:46 nvme_scc -- nvme/functions.sh@21-23 -- # nvme2 id-ctrl (cont.): wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12342
00:09:28.291 18:18:46 nvme_scc -- nvme/functions.sh@21-23 -- # nvme2 id-ctrl (fabrics/power): ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload='-'
00:09:28.291 18:18:46 nvme_scc -- nvme/functions.sh@53-54 -- # local -n _ctrl_ns=nvme2_ns; for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
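The @54 loop just above relies on a bash extended glob so that one pattern matches both the character-device namespace nodes (ng2n1, ng2n2, ...) and the block-device nodes (nvme2n1, ...). A standalone rendering of that glob follows; extglob must be on before bash parses the pattern (the sourced functions.sh evidently enables it), and the echo line is illustrative only.

#!/usr/bin/env bash
shopt -s extglob
ctrl=/sys/class/nvme/nvme2
# ${ctrl##*nvme} -> "2" and ${ctrl##*/} -> "nvme2", so the pattern becomes
# /sys/class/nvme/nvme2/@(ng2|nvme2n)* and catches the character-device
# namespace nodes (ng2n1, ng2n2, ...) as well as block nodes (nvme2n1, ...).
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
  echo "node: ${ns##*/}  nsid suffix: ${ns##*n}"  # @58 keys _ctrl_ns[${ns##*n}]
done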
00:09:28.291 18:18:46 nvme_scc -- nvme/functions.sh@55-57 -- # ns_dev=ng2n1; nvme_get ng2n1 id-ns /dev/ng2n1
00:09:28.291-293 18:18:46 nvme_scc -- nvme/functions.sh@21-23 -- # ng2n1 id-ns: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:28.293 18:18:46 nvme_scc -- nvme/functions.sh@21-23 -- # ng2n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:09:28.293 18:18:46 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
00:09:28.293 18:18:46 nvme_scc -- nvme/functions.sh@54-57 -- # ns_dev=ng2n2; nvme_get ng2n2 id-ns /dev/ng2n2
00:09:28.293-294 18:18:46 nvme_scc -- nvme/functions.sh@21-23 -- # ng2n2 id-ns: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:28.294 18:18:46 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.294 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.295 18:18:46 
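The pattern traced repeatedly at functions.sh@16-23 is a single parsing loop: nvme_get runs /usr/local/src/nvme-cli/nvme id-ns on the node, splits each output line into a register name and a value with IFS=: and read -r reg val, skips registers whose value is empty, and evals one associative-array assignment per register (e.g. ng2n2[nsze]="0x100000"). A minimal standalone sketch of that pattern, assuming nvme-cli's usual one "name : value" pair per line; parse_id_ns is a hypothetical name, not the real helper:

    #!/usr/bin/env bash
    # Sketch of the nvme_get parsing pattern traced above.
    parse_id_ns() {                        # parse_id_ns <array-name> <device>
        local ref=$1 dev=$2 reg val
        local -gA "$ref=()"                # global associative array, as at functions.sh@20
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}       # "lbaf  7 " -> "lbaf7", matching the keys above
            [[ -n ${val//[[:space:]]/} ]] || continue   # empty value -> skip (functions.sh@22)
            eval "${ref}[\$reg]=\${val# }" # e.g. ng2n2[lbaf7]='ms:64 lbads:12 rp:0 '
        done < <(/usr/local/src/nvme-cli/nvme id-ns "$dev")
    }
    parse_id_ns myns /dev/ng2n2 && echo "${myns[nsze]}"   # -> 0x100000

Because the last read variable receives the remainder of the line, values that themselves contain colons (the lbafN descriptors) survive the IFS=: split intact, which is why they appear above as whole strings like 'ms:16 lbads:12 rp:0 '.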
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.295 18:18:46 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:28.295 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.296 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.561 18:18:46 nvme_scc -- 
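Between namespaces, the loop at functions.sh@54-58 drives the enumeration: an extglob pattern matches both the generic character nodes (ng2n1..ng2n3) and the block nodes (nvme2n1..nvme2n3) under the controller's sysfs directory, and _ctrl_ns is keyed by ${ns##*n}, which reduces both ng2n1 and nvme2n1 to NSID 1. Globs sort ng* before nvme*, so the block nodes overwrite the generic ones, which is why ng2n3 above is followed by nvme2n1. A self-contained sketch, assuming the sysfs layout seen on this host:

    #!/usr/bin/env bash
    # Sketch of the namespace enumeration at functions.sh@54-58.
    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme2
    declare -A _ctrl_ns
    # Expands to @("ng2"|"nvme2n")*: matches ng2n1..ng2n3 and nvme2n1..nvme2n3 alike.
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue           # functions.sh@55
        ns_dev=${ns##*/}                   # functions.sh@56, e.g. ng2n2
        # ${ns##*n} keeps only what follows the last 'n', i.e. the NSID,
        # so nvme2n1 overwrites ng2n1 in slot 1.
        _ctrl_ns[${ns##*n}]=$ns_dev        # functions.sh@58
    done
    declare -p _ctrl_ns                    # ends up holding nvme2n1, nvme2n2, nvme2n3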
nvme/functions.sh@21 -- # read -r reg val 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.561 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.562 18:18:46 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:28.562 18:18:46 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.562 18:18:46 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.562 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
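Each lbafN string captured above packs the three fields of an LBA format descriptor: ms (metadata bytes per block), lbads (log2 of the data block size) and rp (relative performance), with "(in use)" marking the active format. flbas=0x4 selects format 4 in its low nibble, i.e. "ms:0 lbads:12 rp:0 (in use)": 4096-byte blocks with no metadata. Combined with nsze, that fixes the namespace size; a quick check with the values parsed for nvme2n1 above:

    #!/usr/bin/env bash
    # Worked example with the register values parsed above.
    nsze=0x100000    # namespace size in logical blocks
    lbads=12         # from lbaf4, the format selected by flbas=0x4
    echo "$(( nsze * (1 << lbads) )) bytes"        # 4294967296 bytes
    echo "$(( (nsze * (1 << lbads)) >> 30 )) GiB"  # 4 GiB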
]] 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:28.563 18:18:46 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:28.563 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.564 18:18:46 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:28.564 
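The nvme_get calls traced above (functions.sh@16-23) turn `nvme id-ns` output into a global associative array, one key per identify field: IFS is set to ':' so each line splits into a register name and its value, empty values are skipped, and an eval stores the pair. A minimal sketch of that mechanism, simplified from the trace tags (not the verbatim functions.sh source; value trimming and key sanitizing are reduced here):

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"            # e.g. declare global array nvme2n3, as in the trace
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue  # the [[ -n ... ]] guard seen at functions.sh@22
            eval "${ref}[\$reg]=\$val" # the eval/assignment pair seen at functions.sh@23
        done < <("$@")                 # feeds e.g. `nvme id-ns /dev/nvme2n3` output
    }

In the trace the identify binary is /usr/local/src/nvme-cli/nvme, so the equivalent call would be along the lines of `nvme_get nvme2n3 /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3` (exact argument plumbing in functions.sh may differ).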
18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:28.564 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:28.565 18:18:46 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:28.565 18:18:46 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.565 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:28.566 18:18:46 nvme_scc -- 
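With all three namespaces of nvme2 parsed, functions.sh@58-63 files the results into a set of global maps so later helpers can look controllers up by name. Roughly, based on the assignments visible in the trace (a sketch; declarations and exact variable plumbing are assumptions):

    declare -gA ctrls nvmes bdfs
    declare -ga ordered_ctrls
    _ctrl_ns[${ns##*n}]=$ns_dev                   # nsid -> namespace array name   (@58)
    ctrls["$ctrl_dev"]=$ctrl_dev                  # controller -> id-ctrl array    (@60)
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns             # controller -> its nsid map     (@61)
    bdfs["$ctrl_dev"]=$pci                        # controller -> PCI address      (@62)
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev    # indexed by controller number   (@63)

Note that _ctrl_ns is a nameref (`local -n _ctrl_ns=nvme3_ns` appears later in the trace at functions.sh@53), so writing to it populates the per-controller namespace array whose name is stored in nvmes[].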
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:28.566 18:18:46 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:28.566 18:18:46 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:28.566 18:18:46 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:28.566 18:18:46 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.566 18:18:46 
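The discovery loop just traced (functions.sh@47-52) walks /sys/class/nvme, filters each controller through pci_can_use, and then runs the same nvme_get parser against id-ctrl. A sketch of that flow under the names shown in the trace; how the BDF is derived from sysfs is an assumption, since the trace only shows the resulting value (pci=0000:00:13.0):

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")   # assumed BDF lookup, e.g. 0000:00:13.0
        pci_can_use "$pci" || continue                    # scripts/common.sh@18-27: skip devices
                                                          # filtered by the job's PCI allow/block lists
        ctrl_dev=${ctrl##*/}                              # e.g. nvme3
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # populate the nvme3[...] array traced above
    done

The empty left-hand side in the traced `[[ =~ 0000:00:13.0 ]]` test is consistent with an unset block list being expanded in the xtrace output, which is why pci_can_use returns 0 here.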
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.566 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:28.567 18:18:46 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:28.567 18:18:46 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.567 
18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:28.567 18:18:46 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:28.567 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.568 
18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:28.568 
18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.568 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.569 18:18:46 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:28.569 18:18:46 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:28.569 18:18:46 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:28.569 18:18:47 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
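The xtrace above is functions.sh's nvme_get loop materializing "nvme id-ctrl" output into one bash associative array per controller, one register per iteration, before the arrays are registered in ctrls/nvmes/bdfs/ordered_ctrls. A minimal sketch of that pattern follows; the trimming details and the absence of eval here are simplifications for illustration, not a copy of the script:

    declare -A nvme3=()                     # one array per controller, as in the trace
    while IFS=: read -r reg val; do         # split each "reg : val" line on the first ':'
        [[ -n $val ]] || continue           # the trace skips lines with an empty value
        reg=${reg//[[:space:]]/}            # "subnqn   " -> "subnqn"
        nvme3[$reg]=${val# }                # the real script assigns via eval 'nvme3[reg]="val"'
    done < <(nvme id-ctrl /dev/nvme3)
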
00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:28.569 18:18:47 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:28.569 18:18:47 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:28.569 18:18:47 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:28.569 18:18:47 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:29.142 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:29.715 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:29.715 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:29.715 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:29.715 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:29.715 18:18:48 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:29.715 18:18:48 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:29.715 18:18:48 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.715 18:18:48 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:29.715 ************************************ 00:09:29.715 START TEST nvme_simple_copy 00:09:29.715 ************************************ 00:09:29.715 18:18:48 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:29.975 Initializing NVMe Controllers 00:09:29.975 Attaching to 0000:00:10.0 00:09:29.975 Controller supports SCC. Attached to 0000:00:10.0 00:09:29.975 Namespace ID: 1 size: 6GB 00:09:29.975 Initialization complete. 
00:09:29.975 00:09:29.975 Controller QEMU NVMe Ctrl (12340 ) 00:09:29.975 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:29.975 Namespace Block Size:4096 00:09:29.975 Writing LBAs 0 to 63 with Random Data 00:09:29.975 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:29.975 LBAs matching Written Data: 64 00:09:29.975 00:09:29.975 real 0m0.287s 00:09:29.975 user 0m0.123s 00:09:29.975 sys 0m0.062s 00:09:29.975 ************************************ 00:09:29.975 END TEST nvme_simple_copy 00:09:29.975 ************************************ 00:09:29.975 18:18:48 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.975 18:18:48 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:29.975 ************************************ 00:09:29.975 END TEST nvme_scc 00:09:29.975 ************************************ 00:09:29.975 00:09:29.975 real 0m7.953s 00:09:29.975 user 0m1.158s 00:09:29.975 sys 0m1.452s 00:09:29.975 18:18:48 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.975 18:18:48 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:30.237 18:18:48 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:30.238 18:18:48 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:30.238 18:18:48 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:30.238 18:18:48 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:30.238 18:18:48 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:30.238 18:18:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:30.238 18:18:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.238 18:18:48 -- common/autotest_common.sh@10 -- # set +x 00:09:30.238 ************************************ 00:09:30.238 START TEST nvme_fdp 00:09:30.238 ************************************ 00:09:30.238 18:18:48 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:09:30.238 * Looking for test storage... 00:09:30.238 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:30.238 18:18:48 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:30.238 18:18:48 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:30.238 18:18:48 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:30.238 18:18:48 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:30.238 18:18:48 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:30.238 18:18:48 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:30.238 18:18:48 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:30.238 18:18:48 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.238 18:18:48 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:30.238 18:18:48 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:30.238 18:18:48 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:30.238 18:18:48 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:30.238 18:18:48 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:30.238 18:18:48 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:30.238 18:18:48 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:30.238 18:18:48 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:30.238 18:18:48 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:30.238 18:18:48 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:30.238 18:18:48 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:30.238 18:18:48 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:30.238 18:18:48 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:30.238 18:18:48 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.238 18:18:48 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:30.238 18:18:48 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:30.238 18:18:48 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:30.238 18:18:48 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:30.238 18:18:48 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.238 18:18:48 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:30.238 18:18:48 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:30.238 18:18:48 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:30.238 18:18:48 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:30.238 18:18:48 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:30.238 18:18:48 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.238 18:18:48 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:30.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.238 --rc genhtml_branch_coverage=1 00:09:30.238 --rc genhtml_function_coverage=1 00:09:30.238 --rc genhtml_legend=1 00:09:30.238 --rc geninfo_all_blocks=1 00:09:30.238 --rc geninfo_unexecuted_blocks=1 00:09:30.238 00:09:30.238 ' 00:09:30.238 18:18:48 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:30.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.238 --rc genhtml_branch_coverage=1 00:09:30.238 --rc genhtml_function_coverage=1 00:09:30.238 --rc genhtml_legend=1 00:09:30.238 --rc geninfo_all_blocks=1 00:09:30.238 --rc geninfo_unexecuted_blocks=1 00:09:30.238 00:09:30.238 ' 00:09:30.238 18:18:48 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:30.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.238 --rc genhtml_branch_coverage=1 00:09:30.238 --rc genhtml_function_coverage=1 00:09:30.238 --rc genhtml_legend=1 00:09:30.238 --rc geninfo_all_blocks=1 00:09:30.238 --rc geninfo_unexecuted_blocks=1 00:09:30.238 00:09:30.238 ' 00:09:30.238 18:18:48 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:30.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.238 --rc genhtml_branch_coverage=1 00:09:30.238 --rc genhtml_function_coverage=1 00:09:30.238 --rc genhtml_legend=1 00:09:30.238 --rc geninfo_all_blocks=1 00:09:30.238 --rc geninfo_unexecuted_blocks=1 00:09:30.238 00:09:30.238 ' 00:09:30.238 18:18:48 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:30.238 18:18:48 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:30.238 18:18:48 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:30.238 18:18:48 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:30.238 18:18:48 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:30.238 18:18:48 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:30.238 18:18:48 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.238 18:18:48 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.238 18:18:48 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.238 18:18:48 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.238 18:18:48 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.238 18:18:48 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.238 18:18:48 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:30.238 18:18:48 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.238 18:18:48 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:30.238 18:18:48 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:30.238 18:18:48 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:30.238 18:18:48 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:30.238 18:18:48 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:30.238 18:18:48 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:30.238 18:18:48 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:30.238 18:18:48 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:30.238 18:18:48 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:30.238 18:18:48 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:30.238 18:18:48 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:30.500 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:30.760 Waiting for block devices as requested 00:09:30.760 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:30.760 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:31.021 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:31.021 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:36.326 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:36.326 18:18:54 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:36.326 18:18:54 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:36.326 18:18:54 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:36.326 18:18:54 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:36.326 18:18:54 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:36.326 18:18:54 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.326 18:18:54 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:36.326 18:18:54 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:36.326 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:36.327 18:18:54 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.327 18:18:54 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:36.327 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:36.328 18:18:54 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.328 
18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:36.328 18:18:54 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:36.328 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.329 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.329 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.329 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:36.329 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:36.329 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.329 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.329 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:36.329 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:36.329 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:36.329 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.329 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.329 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:36.329 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:36.329 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:36.329 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.329 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.329 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:36.329 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:36.329 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:36.329 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.329 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.329 18:18:54 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:36.329 18:18:54 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:36.329 18:18:54 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:36.329 18:18:54 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:36.329 18:18:54 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:36.329 18:18:54 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:36.329 18:18:54 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:36.329 18:18:54 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:36.329 18:18:54 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
[trace condensed: id-ns registers parsed into ng0n1[]: nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0="ms:0 lbads:9 rp:0" lbaf1="ms:8 lbads:9 rp:0" lbaf2="ms:16 lbads:9 rp:0" lbaf3="ms:64 lbads:9 rp:0" lbaf4="ms:0 lbads:12 rp:0 (in use)" lbaf5="ms:8 lbads:12 rp:0" lbaf6="ms:16 lbads:12 rp:0" lbaf7="ms:64 lbads:12 rp:0"]
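The trace above is the nvme_get helper walking "reg : val" lines of nvme-cli output with IFS=: and storing each register into a bash associative array (eval 'ng0n1[reg]="val"'). A minimal sketch of that pattern, not the SPDK helper itself; the device path and the whitespace trimming are assumptions:

    #!/usr/bin/env bash
    # Sketch of the nvme_get pattern traced above: each "reg : val" line of
    # nvme-cli output is split on ':' and stored keyed by register name.
    shopt -s extglob
    declare -A ns_info
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}        # "nsze      " -> "nsze"
        [[ -n $reg && -n $val ]] || continue
        ns_info[$reg]=${val##+( )}      # trim the padding after the colon
    done < <(nvme id-ns /dev/ng0n1)     # example device from this log
    echo "nsze=${ns_info[nsze]} flbas=${ns_info[flbas]}"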
00:09:36.330 18:18:54 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:36.330 18:18:54 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:36.330 18:18:54 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:36.330 18:18:54 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:36.330 18:18:54 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:36.330 18:18:54 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
[trace condensed: id-ns registers parsed into nvme0n1[]; every field matches ng0n1 above (the same namespace reached through the block node rather than the generic node), including lbaf4="ms:0 lbads:12 rp:0 (in use)"]
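The lbafN strings in these id-ns dumps encode the namespace's LBA formats: ms is the metadata size in bytes, lbads the base-2 log of the data size, rp a relative-performance hint; flbas=0x4 selects lbaf4, whose lbads:12 means 4096-byte blocks. A hedged snippet for deriving the block size from such a string (value taken from this log):

    # Extract lbads from an lbaf string and compute the block size (2^lbads).
    lbaf='ms:0 lbads:12 rp:0 (in use)'
    lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<<"$lbaf")
    echo "block size: $((1 << lbads)) bytes"    # 1 << 12 = 4096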
"' 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:36.332 18:18:54 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:36.332 18:18:54 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:36.332 18:18:54 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:36.332 18:18:54 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:36.332 18:18:54 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:36.332 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.333 18:18:54 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.333 18:18:54 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:36.333 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.334 18:18:54 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.334 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.335 18:18:54 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:36.335 18:18:54 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.335 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
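At this point the trace is deep inside nvme_get (nvme/functions.sh@16-23): each [[ -n ... ]] / eval pair above is one loop iteration that takes a "field : value" line from /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1, splits it on ':' with IFS=: read -r reg val, and stores it in a global associative array named after the device. A minimal standalone rendition of that pattern (the trimming details and the final echo are illustrative; the real helper goes through eval so the caller chooses the array name):

    declare -A ng1n1=()
    # Split nvme-cli's "field : value" lines on the first ':'; read puts the
    # remainder into val, which is why lbaf values keep their internal colons.
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}   # "nsze      " -> "nsze"
        val=${val# }               # drop the space after the colon
        [[ -n $reg && -n $val ]] || continue
        ng1n1[$reg]=$val
    done < <(/usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1)
    echo "nsze=${ng1n1[nsze]} flbas=${ng1n1[flbas]}"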
00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:36.336 18:18:54 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
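The mssrl/mcl/msrc values just parsed are the namespace's Copy command limits (NVMe 2.0 fields: Max Single Source Range Length, Max Copy Length, Max Source Range Count; msrc is 0-based). A quick sanity check of what 128/128/127 works out to:

    mssrl=128 mcl=128 msrc=127
    echo "copy: up to $((msrc + 1)) ranges, $mssrl blocks per range, $mcl blocks total"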
00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.336 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:36.337 18:18:54 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:36.337 18:18:54 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:36.337 18:18:54 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.337 18:18:54 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:36.337 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:36.338 18:18:54 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
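The lbafN strings being collected here encode each supported LBA format: ms is the metadata bytes per block, lbads the log2 of the data size, and the low four bits of flbas select the format in use (flbas=0x7 above, matching the "(in use)" tag on lbaf7). A small decode of the parsed lbaf7 value (the sed extraction is illustrative, not part of functions.sh):

    lbaf7='ms:64 lbads:12 rp:0 (in use)'
    lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<< "$lbaf7")
    ms=$(sed -n 's/.*ms:\([0-9]*\).*/\1/p' <<< "$lbaf7")
    echo "LBA data size: $((1 << lbads)) B + $ms B metadata"   # 4096 B + 64 B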
00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:36.338 18:18:54 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:36.338 18:18:54 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:36.338 18:18:54 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:36.338 18:18:54 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:36.338 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.339 18:18:54 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
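Stepping back: the outer loop visible at functions.sh@47-63 walks /sys/class/nvme/nvme*, resolves each controller's PCI address (nvme2 -> 0000:00:12.0 above), registers it in the ctrls/nvmes/bdfs maps, and then descends into its namespaces. A simplified sketch of that walk (the real script also matches the ng* character devices and filters addresses through pci_can_use):

    declare -A ctrls=() bdfs=()
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        ctrl_dev=${ctrl##*/}                             # e.g. nvme2
        pci=$(basename "$(readlink -f "$ctrl/device")")  # e.g. 0000:00:12.0
        ctrls[$ctrl_dev]=$ctrl_dev
        bdfs[$ctrl_dev]=$pci
        for ns in "$ctrl/$ctrl_dev"n*; do                # e.g. nvme2n1
            [[ -e $ns ]] && echo "${ns##*/} belongs to $ctrl_dev ($pci)"
        done
    done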
00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:36.339 18:18:54 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.339 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
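wctemp and cctemp are reported in kelvins per the NVMe spec, so the 343/373 parsed above from the emulated controller translate to roughly 70 °C warning and 100 °C critical thresholds:

    wctemp=343 cctemp=373
    echo "warning: $((wctemp - 273)) C, critical: $((cctemp - 273)) C"   # ~70 C / ~100 C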
00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:36.340 18:18:54 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:36.340 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.341 18:18:54 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
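Many of the values being captured here are bitmasks, and later test stages gate on individual bits. A small spec-based sketch of how such fields decode (bit layouts per the NVMe base specification; the echo labels are illustrative, not SPDK helpers):

oncs=${nvme2[oncs]}                        # 0x15d in the trace above
(( oncs & (1 << 2) )) && echo "Dataset Management supported"   # ONCS bit 2
(( oncs & (1 << 3) )) && echo "Write Zeroes supported"         # ONCS bit 3
sqes=${nvme2[sqes]}                        # 0x66: bits 3:0 required, bits 7:4 maximum
echo "max SQ entry size: $((1 << (sqes >> 4 & 0xf))) bytes"    # 2^6 = 64
cqes=${nvme2[cqes]}                        # 0x44
echo "max CQ entry size: $((1 << (cqes >> 4 & 0xf))) bytes"    # 2^4 = 16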
00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.341 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.342 
18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.342 18:18:54 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:36.342 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.343 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:36.344 18:18:54 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.344 
18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:36.344 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:18:54 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:36.609 
18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.609 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
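The per-namespace passes (ng2n1, ng2n2, and now ng2n3) are driven by the extglob pattern visible in the for-loop records above, which matches both the ng<ctrl>n* character devices and the <ctrl>n* block devices under sysfs. A sketch of that enumeration plus the LBA-format decode the arrays make possible (flbas=0x4 selects lbaf4, "ms:0 lbads:12", i.e. 4096-byte blocks); the sed extraction is illustrative, not SPDK's code:

shopt -s extglob
ctrl=/sys/class/nvme/nvme2
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do    # ng2* or nvme2n*
    [[ -e $ns ]] || continue
    echo "namespace device: ${ns##*/}"                         # ng2n1, ng2n2, ng2n3, ...
done

fmt=$(( ${ng2n1[flbas]} & 0xf ))                               # low nibble = active format index
lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<< "${ng2n1[lbaf$fmt]}")
echo "logical block size: $((1 << lbads)) bytes"               # 2^12 = 4096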
00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:36.610 18:18:54 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:36.610 18:18:54 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.610 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.611 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:36.611 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:36.611 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.611 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.611 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.611 18:18:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:36.611 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:36.611 18:18:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:36.611 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.611 18:18:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.611 18:18:54 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:36.611 18:18:54 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:36.611 18:18:54 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:36.611 18:18:54 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:36.611 18:18:54 nvme_fdp -- 
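Every record block above is the same short routine run once per id-ns field. Reconstructed from the functions.sh@16-@23 markers in this trace, the parser looks roughly like the sketch below; the whitespace normalization and the `nvme_cmd` indirection are assumptions made for the sketch, not verbatim SPDK source:

```bash
# Sketch reconstructed from the functions.sh@16-@23 trace markers above.
# Parses `nvme id-ns` / `nvme id-ctrl` key:value output into a named
# global associative array, e.g. nvme_get ng2n3 id-ns /dev/ng2n3.
nvme_get() {
    local ref=$1 reg val                  # functions.sh@17
    shift                                 # functions.sh@18
    local -gA "$ref=()"                   # functions.sh@20: declare e.g. ng2n3 globally

    while IFS=: read -r reg val; do       # functions.sh@21: split "nsze : 0x100000"
        reg=${reg//[[:space:]]/}          # assumed trimming: "lbaf  4" -> "lbaf4"
        [[ -n $val ]] || continue         # functions.sh@22: skip value-less header lines
        read -r val <<< "$val"            # assumed normalization of surrounding blanks
        eval "${ref}[${reg}]=\"${val}\""  # functions.sh@23: e.g. ng2n3[nsze]="0x100000"
    done < <("${nvme_cmd:-nvme}" "$@")    # functions.sh@16 runs /usr/local/src/nvme-cli/nvme
}
```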
00:09:36.611 18:18:54 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:36.611 18:18:54 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:09:36.611 18:18:54 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:09:36.611 18:18:54 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:09:36.611 18:18:54 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val
00:09:36.611 18:18:54 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:36.611 18:18:54 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:09:36.611 18:18:54 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:09:36.612 18:18:55 nvme_fdp -- nvme/functions.sh@21-23 (condensed) -- # nvme2n1 register dump: values identical to ng2n3 above
00:09:36.612 18:18:55 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
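The @54-@58 records that bracket each dump are the namespace scan for the current controller. A sketch of that loop, assembled from the markers (the extglob setting and the `_ctrl_ns` declaration are assumed to happen elsewhere in the real script):

```bash
shopt -s extglob                 # the @(...|...) alternation below needs extglob
ctrl=/sys/class/nvme/nvme2       # current controller, as in the trace
declare -A _ctrl_ns=()

# functions.sh@54: matches both ng2n* (char devices) and nvme2n* (block devices)
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    [[ -e $ns ]] || continue                   # functions.sh@55
    ns_dev=${ns##*/}                           # functions.sh@56: e.g. nvme2n1
    nvme_get "$ns_dev" id-ns "/dev/$ns_dev"    # functions.sh@57 (sketch above)
    _ctrl_ns[${ns##*n}]=$ns_dev                # functions.sh@58: keyed by namespace index
done
```

The glob sorts ng2n1..ng2n3 ahead of nvme2n1..nvme2n3, so each `_ctrl_ns` slot is written twice and the block-device name wins; that matches the trace, where `_ctrl_ns[${ns##*n}]` is set to ng2n3 and later to nvme2n3.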
00:09:36.612 18:18:55 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:36.612 18:18:55 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:09:36.612 18:18:55 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:09:36.612 18:18:55 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:09:36.612 18:18:55 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:09:36.613 18:18:55 nvme_fdp -- nvme/functions.sh@21-23 (condensed) -- # nvme2n2 register dump: values identical to ng2n3 above
00:09:36.614 18:18:55 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
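With the arrays filled in, later checks reduce to plain lookups; a hypothetical consumer, illustrative only, using values from the dump above:

```bash
# Read back fields that nvme_get stored for nvme2n2.
echo "size (blocks):  ${nvme2n2[nsze]}"     # 0x100000
echo "formatted LBAF: ${nvme2n2[flbas]}"    # 0x4, i.e. the lbaf4 entry
echo "lbaf4:          ${nvme2n2[lbaf4]}"    # ms:0 lbads:12 rp:0 (in use)
```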
00:09:36.614 18:18:55 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:36.614 18:18:55 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:09:36.614 18:18:55 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:09:36.614 18:18:55 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:09:36.614 18:18:55 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:09:36.615 18:18:55 nvme_fdp -- nvme/functions.sh@21-23 (condensed) -- # nvme2n3 register dump: values identical to ng2n3 above
00:09:36.615 18:18:55 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:09:36.615 18:18:55 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:09:36.615 18:18:55 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:09:36.615 18:18:55 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:09:36.615 18:18:55 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
00:09:36.615 18:18:55 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:36.615 18:18:55 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:09:36.615 18:18:55 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:09:36.615 18:18:55 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0
00:09:36.615 18:18:55 nvme_fdp -- scripts/common.sh@18 -- # local i
00:09:36.615 18:18:55 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]]
00:09:36.616 18:18:55 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:36.616 18:18:55 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val
00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()'
00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:36.616 18:18:55 nvme_fdp --
nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
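This wall of xtrace is nvme_get snapshotting `nvme id-ctrl /dev/nvme3` into a bash associative array: each output line is split on ':' into a reg/val pair and stored under the register name. A condensed sketch of the same loop, under the assumption of nvme-cli's "name : value" id-ctrl layout (simplified; the real helper in nvme/functions.sh evals the assignment because the array name is itself a parameter, and it repeats the pass for every namespace):

declare -A nvme3
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}      # register names arrive right-padded
    [[ -n $reg && -n $val ]] || continue
    nvme3[$reg]=${val# }          # drop the single leading space
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3)
echo "${nvme3[ctratt]}"           # -> 0x88010 on this controller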
00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.616 18:18:55 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:36.616 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.616 
18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:18:55 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.617 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:36.618 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
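Among the registers captured a few lines back, sqes=0x66 and cqes=0x44 pack the required and maximum queue entry sizes as powers of two in the low and high nibbles, i.e. fixed 64-byte submission and 16-byte completion entries. A quick decode (a sketch; nibble layout per the NVMe Identify Controller SQES/CQES definition):

decode_qes() {                    # SQES/CQES: bits 3:0 required, bits 7:4 maximum
    local v=$1
    echo "required=$((1 << (v & 0xf)))B maximum=$((1 << (v >> 4)))B"
}
decode_qes $((0x66))              # SQE: required=64B maximum=64B
decode_qes $((0x44))              # CQE: required=16B maximum=16B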
00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:36.619 18:18:55 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:36.619 18:18:55 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:36.619 18:18:55 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:36.619 18:18:55 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:36.619 18:18:55 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:37.193 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:37.767 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:37.767 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:37.767 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:37.767 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:37.767 18:18:56 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:37.767 18:18:56 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:37.767 18:18:56 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.767 18:18:56 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:37.767 ************************************ 00:09:37.767 START TEST nvme_flexible_data_placement 00:09:37.767 ************************************ 00:09:37.767 18:18:56 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:38.029 Initializing NVMe Controllers 00:09:38.029 Attaching to 0000:00:13.0 00:09:38.029 Controller supports FDP Attached to 0000:00:13.0 00:09:38.029 Namespace ID: 1 Endurance Group ID: 1 00:09:38.029 Initialization complete. 
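The selection pass above comes down to a single bit: CTRATT bit 19 advertises Flexible Data Placement, so nvme3's 0x88010 qualifies while the 0x8000 reported by the other three controllers does not. A standalone sketch of that test, plus the raw log-page reads behind the dump that follows (FDP log IDs 0x20-0x23 per NVMe 2.0; the --lsi endurance-group scoping and log lengths are assumptions, and the device must sit on the kernel nvme driver, e.g. before setup.sh runs or after setup.sh reset):

ctrl_supports_fdp() {
    local ctratt=$1
    (( ctratt & (1 << 19) ))               # CTRATT bit 19: FDP supported
}
ctrl_supports_fdp $((0x88010)) && echo "nvme3: FDP"       # prints
ctrl_supports_fdp $((0x8000))  || echo "nvme2: no FDP"    # prints

nvme get-feature /dev/nvme3 -f 0x1d                           # FDP feature (FID 0x1D)
nvme get-log /dev/nvme3 --log-id=0x20 --lsi=1 --log-len=512   # FDP configurations
nvme get-log /dev/nvme3 --log-id=0x22 --lsi=1 --log-len=64    # FDP statistics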
00:09:38.029 00:09:38.029 ================================== 00:09:38.029 == FDP tests for Namespace: #01 == 00:09:38.029 ================================== 00:09:38.029 00:09:38.029 Get Feature: FDP: 00:09:38.029 ================= 00:09:38.029 Enabled: Yes 00:09:38.029 FDP configuration Index: 0 00:09:38.029 00:09:38.029 FDP configurations log page 00:09:38.029 =========================== 00:09:38.029 Number of FDP configurations: 1 00:09:38.029 Version: 0 00:09:38.029 Size: 112 00:09:38.029 FDP Configuration Descriptor: 0 00:09:38.029 Descriptor Size: 96 00:09:38.029 Reclaim Group Identifier format: 2 00:09:38.029 FDP Volatile Write Cache: Not Present 00:09:38.029 FDP Configuration: Valid 00:09:38.029 Vendor Specific Size: 0 00:09:38.029 Number of Reclaim Groups: 2 00:09:38.029 Number of Reclaim Unit Handles: 8 00:09:38.029 Max Placement Identifiers: 128 00:09:38.029 Number of Namespaces Supported: 256 00:09:38.029 Reclaim Unit Nominal Size: 6000000 bytes 00:09:38.029 Estimated Reclaim Unit Time Limit: Not Reported 00:09:38.029 RUH Desc #000: RUH Type: Initially Isolated 00:09:38.029 RUH Desc #001: RUH Type: Initially Isolated 00:09:38.029 RUH Desc #002: RUH Type: Initially Isolated 00:09:38.029 RUH Desc #003: RUH Type: Initially Isolated 00:09:38.029 RUH Desc #004: RUH Type: Initially Isolated 00:09:38.029 RUH Desc #005: RUH Type: Initially Isolated 00:09:38.029 RUH Desc #006: RUH Type: Initially Isolated 00:09:38.029 RUH Desc #007: RUH Type: Initially Isolated 00:09:38.029 00:09:38.029 FDP reclaim unit handle usage log page 00:09:38.029 ====================================== 00:09:38.029 Number of Reclaim Unit Handles: 8 00:09:38.029 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:38.029 RUH Usage Desc #001: RUH Attributes: Unused 00:09:38.030 RUH Usage Desc #002: RUH Attributes: Unused 00:09:38.030 RUH Usage Desc #003: RUH Attributes: Unused 00:09:38.030 RUH Usage Desc #004: RUH Attributes: Unused 00:09:38.030 RUH Usage Desc #005: RUH Attributes: Unused 00:09:38.030 RUH Usage Desc #006: RUH Attributes: Unused 00:09:38.030 RUH Usage Desc #007: RUH Attributes: Unused 00:09:38.030 00:09:38.030 FDP statistics log page 00:09:38.030 ======================= 00:09:38.030 Host bytes with metadata written: 954200064 00:09:38.030 Media bytes with metadata written: 954298368 00:09:38.030 Media bytes erased: 0 00:09:38.030 00:09:38.030 FDP Reclaim unit handle status 00:09:38.030 ============================== 00:09:38.030 Number of RUHS descriptors: 2 00:09:38.030 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000003201 00:09:38.030 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:38.030 00:09:38.030 FDP write on placement id: 0 success 00:09:38.030 00:09:38.030 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:09:38.030 00:09:38.030 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:38.030 00:09:38.030 Get Feature: FDP Events for Placement handle: #0 00:09:38.030 ======================== 00:09:38.030 Number of FDP Events: 6 00:09:38.030 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:38.030 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:38.030 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:09:38.030 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:38.030 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:38.030 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:38.030 00:09:38.030 FDP events log page
00:09:38.030 =================== 00:09:38.030 Number of FDP events: 1 00:09:38.030 FDP Event #0: 00:09:38.030 Event Type: RU Not Written to Capacity 00:09:38.030 Placement Identifier: Valid 00:09:38.030 NSID: Valid 00:09:38.030 Location: Valid 00:09:38.030 Placement Identifier: 0 00:09:38.030 Event Timestamp: 7 00:09:38.030 Namespace Identifier: 1 00:09:38.030 Reclaim Group Identifier: 0 00:09:38.030 Reclaim Unit Handle Identifier: 0 00:09:38.030 00:09:38.030 FDP test passed 00:09:38.030 00:09:38.030 real 0m0.252s 00:09:38.030 user 0m0.078s 00:09:38.030 sys 0m0.072s 00:09:38.030 ************************************ 00:09:38.030 END TEST nvme_flexible_data_placement 00:09:38.030 ************************************ 00:09:38.030 18:18:56 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.030 18:18:56 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:38.030 ************************************ 00:09:38.030 END TEST nvme_fdp 00:09:38.030 ************************************ 00:09:38.030 00:09:38.030 real 0m7.997s 00:09:38.030 user 0m1.147s 00:09:38.030 sys 0m1.494s 00:09:38.030 18:18:56 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.030 18:18:56 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:38.292 18:18:56 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:38.292 18:18:56 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:38.292 18:18:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:38.292 18:18:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.292 18:18:56 -- common/autotest_common.sh@10 -- # set +x 00:09:38.292 ************************************ 00:09:38.292 START TEST nvme_rpc 00:09:38.292 ************************************ 00:09:38.292 18:18:56 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:38.292 * Looking for test storage... 
00:09:38.292 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:38.292 18:18:56 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:38.292 18:18:56 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:38.292 18:18:56 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:38.292 18:18:56 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:38.292 18:18:56 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:38.292 18:18:56 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:38.292 18:18:56 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:38.292 18:18:56 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:38.292 18:18:56 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:38.292 18:18:56 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:38.292 18:18:56 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:38.292 18:18:56 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:38.292 18:18:56 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:38.292 18:18:56 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:38.292 18:18:56 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:38.292 18:18:56 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:38.292 18:18:56 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:38.292 18:18:56 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:38.293 18:18:56 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:38.293 18:18:56 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:38.293 18:18:56 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:38.293 18:18:56 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:38.293 18:18:56 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:38.293 18:18:56 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:38.293 18:18:56 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:38.293 18:18:56 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:38.293 18:18:56 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:38.293 18:18:56 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:38.293 18:18:56 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:38.293 18:18:56 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:38.293 18:18:56 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:38.293 18:18:56 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:38.293 18:18:56 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:38.293 18:18:56 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:38.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.293 --rc genhtml_branch_coverage=1 00:09:38.293 --rc genhtml_function_coverage=1 00:09:38.293 --rc genhtml_legend=1 00:09:38.293 --rc geninfo_all_blocks=1 00:09:38.293 --rc geninfo_unexecuted_blocks=1 00:09:38.293 00:09:38.293 ' 00:09:38.293 18:18:56 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:38.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.293 --rc genhtml_branch_coverage=1 00:09:38.293 --rc genhtml_function_coverage=1 00:09:38.293 --rc genhtml_legend=1 00:09:38.293 --rc geninfo_all_blocks=1 00:09:38.293 --rc geninfo_unexecuted_blocks=1 00:09:38.293 00:09:38.293 ' 00:09:38.293 18:18:56 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:38.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.293 --rc genhtml_branch_coverage=1 00:09:38.293 --rc genhtml_function_coverage=1 00:09:38.293 --rc genhtml_legend=1 00:09:38.293 --rc geninfo_all_blocks=1 00:09:38.293 --rc geninfo_unexecuted_blocks=1 00:09:38.293 00:09:38.293 ' 00:09:38.293 18:18:56 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:38.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.293 --rc genhtml_branch_coverage=1 00:09:38.293 --rc genhtml_function_coverage=1 00:09:38.293 --rc genhtml_legend=1 00:09:38.293 --rc geninfo_all_blocks=1 00:09:38.293 --rc geninfo_unexecuted_blocks=1 00:09:38.293 00:09:38.293 ' 00:09:38.293 18:18:56 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:38.293 18:18:56 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:38.293 18:18:56 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:38.293 18:18:56 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:38.293 18:18:56 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:38.293 18:18:56 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:38.293 18:18:56 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:38.293 18:18:56 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:38.293 18:18:56 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:38.293 18:18:56 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:38.293 18:18:56 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:38.293 18:18:56 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:38.293 18:18:56 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:38.293 18:18:56 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:38.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.293 18:18:56 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:38.293 18:18:56 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65643 00:09:38.293 18:18:56 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:38.293 18:18:56 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:38.293 18:18:56 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65643 00:09:38.293 18:18:56 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 65643 ']' 00:09:38.293 18:18:56 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.293 18:18:56 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:38.293 18:18:56 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.293 18:18:56 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:38.293 18:18:56 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:38.555 [2024-11-20 18:18:56.979810] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
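The target start that begins on the next line uses the BDF resolved just above: get_first_nvme_bdf asks gen_nvme.sh for a bdev config and pulls every params.traddr out with jq. A standalone sketch of that discovery, condensed from the trace:

  # gen_nvme.sh emits a JSON bdev config; each attach entry carries its
  # PCI address in params.traddr.
  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
  echo "${bdfs[0]}"   # 0000:00:10.0 in this run, first of the four found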
00:09:38.555 [2024-11-20 18:18:56.979962] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65643 ] 00:09:38.555 [2024-11-20 18:18:57.146014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:38.818 [2024-11-20 18:18:57.266310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.818 [2024-11-20 18:18:57.266388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.392 18:18:57 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:39.392 18:18:57 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:39.392 18:18:57 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:39.654 Nvme0n1 00:09:39.654 18:18:58 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:39.654 18:18:58 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:39.916 request: 00:09:39.916 { 00:09:39.916 "bdev_name": "Nvme0n1", 00:09:39.916 "filename": "non_existing_file", 00:09:39.916 "method": "bdev_nvme_apply_firmware", 00:09:39.916 "req_id": 1 00:09:39.916 } 00:09:39.916 Got JSON-RPC error response 00:09:39.916 response: 00:09:39.916 { 00:09:39.916 "code": -32603, 00:09:39.916 "message": "open file failed." 00:09:39.916 } 00:09:39.916 18:18:58 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:39.917 18:18:58 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:39.917 18:18:58 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:40.178 18:18:58 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:40.178 18:18:58 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65643 00:09:40.178 18:18:58 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 65643 ']' 00:09:40.178 18:18:58 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 65643 00:09:40.178 18:18:58 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:40.178 18:18:58 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:40.178 18:18:58 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65643 00:09:40.178 killing process with pid 65643 00:09:40.178 18:18:58 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:40.178 18:18:58 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:40.178 18:18:58 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65643' 00:09:40.178 18:18:58 nvme_rpc -- common/autotest_common.sh@973 -- # kill 65643 00:09:40.178 18:18:58 nvme_rpc -- common/autotest_common.sh@978 -- # wait 65643 00:09:41.649 ************************************ 00:09:41.649 END TEST nvme_rpc 00:09:41.649 ************************************ 00:09:41.649 00:09:41.649 real 0m3.553s 00:09:41.649 user 0m6.675s 00:09:41.649 sys 0m0.591s 00:09:41.649 18:19:00 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.649 18:19:00 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.911 18:19:00 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:41.911 18:19:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:09:41.911 18:19:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.911 18:19:00 -- common/autotest_common.sh@10 -- # set +x 00:09:41.911 ************************************ 00:09:41.911 START TEST nvme_rpc_timeouts 00:09:41.911 ************************************ 00:09:41.911 18:19:00 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:41.911 * Looking for test storage... 00:09:41.911 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:41.911 18:19:00 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:41.911 18:19:00 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:09:41.911 18:19:00 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:41.911 18:19:00 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:41.911 18:19:00 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:41.911 18:19:00 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:41.911 18:19:00 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:41.911 18:19:00 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:41.911 18:19:00 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:41.911 18:19:00 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:41.911 18:19:00 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:41.911 18:19:00 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:41.911 18:19:00 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:41.911 18:19:00 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:41.911 18:19:00 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:41.911 18:19:00 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:41.911 18:19:00 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:41.911 18:19:00 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:41.911 18:19:00 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:41.911 18:19:00 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:41.911 18:19:00 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:41.911 18:19:00 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:41.911 18:19:00 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:41.911 18:19:00 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:41.911 18:19:00 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:41.911 18:19:00 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:41.911 18:19:00 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:41.911 18:19:00 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:41.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
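The lt/cmp_versions walk traced on both sides of the interleaved startup message gates the lcov coverage flags on lcov being older than 2.x. Condensed into a runnable sketch, following the IFS=.-: field splitting the trace shows (non-numeric fields are ignored here for brevity):

  lt() { cmp_versions "$1" '<' "$2"; }
  cmp_versions() {
    # Split both versions on . - : and compare field by field, numerically;
    # a missing field counts as 0.
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal, so not strictly less-than
  }
  lt 1.15 2 && echo "pre-2.x lcov: keep the branch/function coverage flags"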
00:09:41.911 18:19:00 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:41.911 18:19:00 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:41.911 18:19:00 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:41.911 18:19:00 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:41.911 18:19:00 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:41.911 18:19:00 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:41.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.911 --rc genhtml_branch_coverage=1 00:09:41.911 --rc genhtml_function_coverage=1 00:09:41.911 --rc genhtml_legend=1 00:09:41.911 --rc geninfo_all_blocks=1 00:09:41.911 --rc geninfo_unexecuted_blocks=1 00:09:41.911 00:09:41.911 ' 00:09:41.911 18:19:00 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:41.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.911 --rc genhtml_branch_coverage=1 00:09:41.911 --rc genhtml_function_coverage=1 00:09:41.911 --rc genhtml_legend=1 00:09:41.911 --rc geninfo_all_blocks=1 00:09:41.911 --rc geninfo_unexecuted_blocks=1 00:09:41.911 00:09:41.911 ' 00:09:41.911 18:19:00 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:41.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.911 --rc genhtml_branch_coverage=1 00:09:41.911 --rc genhtml_function_coverage=1 00:09:41.911 --rc genhtml_legend=1 00:09:41.911 --rc geninfo_all_blocks=1 00:09:41.911 --rc geninfo_unexecuted_blocks=1 00:09:41.911 00:09:41.911 ' 00:09:41.911 18:19:00 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:41.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.911 --rc genhtml_branch_coverage=1 00:09:41.911 --rc genhtml_function_coverage=1 00:09:41.911 --rc genhtml_legend=1 00:09:41.911 --rc geninfo_all_blocks=1 00:09:41.911 --rc geninfo_unexecuted_blocks=1 00:09:41.911 00:09:41.911 ' 00:09:41.911 18:19:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:41.911 18:19:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65708 00:09:41.911 18:19:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65708 00:09:41.911 18:19:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65740 00:09:41.912 18:19:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:09:41.912 18:19:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65740 00:09:41.912 18:19:00 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 65740 ']' 00:09:41.912 18:19:00 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.912 18:19:00 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:41.912 18:19:00 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
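The 'Waiting for process...' message echoed above is waitforlisten polling the freshly started target until its RPC socket answers. A hedged sketch of that loop (the real helper lives in autotest_common.sh; the rpc.py probe and the 0.5 s retry cadence here are assumptions):

  waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    [[ -n "$pid" ]] || return 1
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = 0; i < max_retries; i++ )); do
      kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
      # any cheap RPC proves the socket is up; rpc_get_methods is always registered
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 \
          rpc_get_methods &>/dev/null && return 0
      sleep 0.5
    done
    return 1
  }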
00:09:41.912 18:19:00 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:41.912 18:19:00 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:41.912 18:19:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:42.174 [2024-11-20 18:19:00.543765] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:09:42.174 [2024-11-20 18:19:00.543908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65740 ] 00:09:42.174 [2024-11-20 18:19:00.707323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:42.435 [2024-11-20 18:19:00.836138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.435 [2024-11-20 18:19:00.836175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.009 Checking default timeout settings: 00:09:43.009 18:19:01 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.009 18:19:01 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:43.009 18:19:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:43.009 18:19:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:43.270 Making settings changes with rpc: 00:09:43.270 18:19:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:43.270 18:19:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:43.532 Check default vs. modified settings: 00:09:43.532 18:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:09:43.532 18:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:43.794 18:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:43.794 18:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:43.794 18:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65708 00:09:43.794 18:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:43.794 18:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:44.056 18:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:44.056 18:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:44.056 18:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65708 00:09:44.056 18:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:44.056 Setting action_on_timeout is changed as expected. 
00:09:44.056 18:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:44.056 18:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:44.056 18:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:44.056 18:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:44.056 18:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65708 00:09:44.056 18:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:44.056 18:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:44.056 18:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:44.056 18:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:44.056 18:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65708 00:09:44.056 18:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:44.056 Setting timeout_us is changed as expected. 00:09:44.056 18:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:44.056 18:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:44.056 18:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:09:44.056 18:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:44.056 18:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65708 00:09:44.056 18:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:44.056 18:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:44.056 18:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:44.056 18:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:44.056 18:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65708 00:09:44.056 18:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:44.056 Setting timeout_admin_us is changed as expected. 00:09:44.056 18:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:44.056 18:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:44.056 18:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
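All three 'changed as expected' checks above share one pattern: dump save_config to a temp file before and after bdev_nvme_set_options, then compare each field with non-alphanumerics stripped. Condensed from the trace:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc_py save_config > /tmp/settings_default_65708
  $rpc_py bdev_nvme_set_options --timeout-us=12000000 \
      --timeout-admin-us=24000000 --action-on-timeout=abort
  $rpc_py save_config > /tmp/settings_modified_65708
  for setting in action_on_timeout timeout_us timeout_admin_us; do
    # value column of the matching JSON line, stripped of quotes and commas
    before=$(grep "$setting" /tmp/settings_default_65708 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$setting" /tmp/settings_modified_65708 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    [[ "$before" != "$after" ]] && echo "Setting $setting is changed as expected."
  done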
00:09:44.056 18:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:44.056 18:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65708 /tmp/settings_modified_65708 00:09:44.056 18:19:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65740 00:09:44.056 18:19:02 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 65740 ']' 00:09:44.056 18:19:02 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 65740 00:09:44.056 18:19:02 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:44.056 18:19:02 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:44.056 18:19:02 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65740 00:09:44.056 killing process with pid 65740 00:09:44.056 18:19:02 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:44.056 18:19:02 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:44.056 18:19:02 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65740' 00:09:44.056 18:19:02 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 65740 00:09:44.056 18:19:02 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 65740 00:09:45.435 RPC TIMEOUT SETTING TEST PASSED. 00:09:45.435 18:19:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:09:45.435 ************************************ 00:09:45.435 END TEST nvme_rpc_timeouts 00:09:45.435 ************************************ 00:09:45.435 00:09:45.435 real 0m3.518s 00:09:45.435 user 0m6.754s 00:09:45.435 sys 0m0.624s 00:09:45.435 18:19:03 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.435 18:19:03 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:45.435 18:19:03 -- spdk/autotest.sh@239 -- # uname -s 00:09:45.435 18:19:03 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:45.435 18:19:03 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:45.435 18:19:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:45.435 18:19:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.435 18:19:03 -- common/autotest_common.sh@10 -- # set +x 00:09:45.435 ************************************ 00:09:45.435 START TEST sw_hotplug 00:09:45.435 ************************************ 00:09:45.435 18:19:03 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:45.435 * Looking for test storage... 
00:09:45.435 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:45.435 18:19:03 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:45.435 18:19:03 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:09:45.435 18:19:03 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:45.435 18:19:04 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:45.435 18:19:04 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:45.435 18:19:04 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:45.435 18:19:04 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:45.435 18:19:04 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:45.435 18:19:04 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:45.435 18:19:04 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:45.435 18:19:04 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:45.435 18:19:04 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:45.435 18:19:04 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:45.435 18:19:04 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:45.435 18:19:04 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:45.435 18:19:04 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:45.435 18:19:04 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:45.435 18:19:04 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:45.435 18:19:04 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:45.435 18:19:04 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:45.435 18:19:04 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:45.435 18:19:04 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:45.435 18:19:04 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:45.435 18:19:04 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:45.435 18:19:04 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:45.435 18:19:04 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:45.435 18:19:04 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:45.435 18:19:04 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:45.435 18:19:04 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:45.435 18:19:04 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:45.435 18:19:04 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:45.435 18:19:04 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:45.435 18:19:04 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:45.435 18:19:04 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:45.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.435 --rc genhtml_branch_coverage=1 00:09:45.435 --rc genhtml_function_coverage=1 00:09:45.435 --rc genhtml_legend=1 00:09:45.435 --rc geninfo_all_blocks=1 00:09:45.435 --rc geninfo_unexecuted_blocks=1 00:09:45.435 00:09:45.435 ' 00:09:45.435 18:19:04 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:45.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.435 --rc genhtml_branch_coverage=1 00:09:45.435 --rc genhtml_function_coverage=1 00:09:45.435 --rc genhtml_legend=1 00:09:45.435 --rc geninfo_all_blocks=1 00:09:45.435 --rc geninfo_unexecuted_blocks=1 00:09:45.435 00:09:45.435 ' 00:09:45.435 18:19:04 
sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:45.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.435 --rc genhtml_branch_coverage=1 00:09:45.435 --rc genhtml_function_coverage=1 00:09:45.435 --rc genhtml_legend=1 00:09:45.435 --rc geninfo_all_blocks=1 00:09:45.435 --rc geninfo_unexecuted_blocks=1 00:09:45.435 00:09:45.435 ' 00:09:45.435 18:19:04 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:45.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.435 --rc genhtml_branch_coverage=1 00:09:45.435 --rc genhtml_function_coverage=1 00:09:45.435 --rc genhtml_legend=1 00:09:45.435 --rc geninfo_all_blocks=1 00:09:45.435 --rc geninfo_unexecuted_blocks=1 00:09:45.435 00:09:45.435 ' 00:09:45.435 18:19:04 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:45.697 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:45.959 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:45.959 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:45.959 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:45.959 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:45.959 18:19:04 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:45.959 18:19:04 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:45.959 18:19:04 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:09:45.959 18:19:04 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:45.959 
18:19:04 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:45.959 18:19:04 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:45.960 18:19:04 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:45.960 18:19:04 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:45.960 18:19:04 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:45.960 18:19:04 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:45.960 18:19:04 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:45.960 18:19:04 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:45.960 18:19:04 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:45.960 18:19:04 sw_hotplug -- scripts/common.sh@321 -- # for bdf 
in "${nvmes[@]}" 00:09:45.960 18:19:04 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:45.960 18:19:04 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:45.960 18:19:04 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:45.960 18:19:04 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:45.960 18:19:04 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:45.960 18:19:04 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:45.960 18:19:04 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:45.960 18:19:04 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:45.960 18:19:04 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:46.221 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:46.483 Waiting for block devices as requested 00:09:46.483 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:46.745 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:46.745 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:46.745 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:52.039 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:52.039 18:19:10 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:52.039 18:19:10 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:52.300 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:52.300 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:52.300 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:52.562 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:52.823 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:52.823 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:53.084 18:19:11 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:53.084 18:19:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:53.084 18:19:11 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:53.084 18:19:11 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:53.084 18:19:11 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66602 00:09:53.085 18:19:11 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:53.085 18:19:11 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:53.085 18:19:11 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:53.085 18:19:11 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:53.085 18:19:11 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:53.085 18:19:11 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:53.085 18:19:11 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:53.085 18:19:11 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:53.085 18:19:11 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:09:53.085 18:19:11 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:53.085 18:19:11 sw_hotplug -- nvme/sw_hotplug.sh@28 
-- # local hotplug_wait=6 00:09:53.085 18:19:11 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:53.085 18:19:11 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:53.085 18:19:11 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:53.347 Initializing NVMe Controllers 00:09:53.347 Attaching to 0000:00:10.0 00:09:53.347 Attaching to 0000:00:11.0 00:09:53.347 Attached to 0000:00:10.0 00:09:53.347 Attached to 0000:00:11.0 00:09:53.347 Initialization complete. Starting I/O... 00:09:53.347 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:53.347 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:53.347 00:09:54.292 QEMU NVMe Ctrl (12340 ): 2224 I/Os completed (+2224) 00:09:54.292 QEMU NVMe Ctrl (12341 ): 2225 I/Os completed (+2225) 00:09:54.292 00:09:55.236 QEMU NVMe Ctrl (12340 ): 4979 I/Os completed (+2755) 00:09:55.236 QEMU NVMe Ctrl (12341 ): 4981 I/Os completed (+2756) 00:09:55.236 00:09:56.170 QEMU NVMe Ctrl (12340 ): 8732 I/Os completed (+3753) 00:09:56.170 QEMU NVMe Ctrl (12341 ): 8737 I/Os completed (+3756) 00:09:56.170 00:09:57.547 QEMU NVMe Ctrl (12340 ): 12398 I/Os completed (+3666) 00:09:57.547 QEMU NVMe Ctrl (12341 ): 12410 I/Os completed (+3673) 00:09:57.547 00:09:58.491 QEMU NVMe Ctrl (12340 ): 15387 I/Os completed (+2989) 00:09:58.491 QEMU NVMe Ctrl (12341 ): 15378 I/Os completed (+2968) 00:09:58.491 00:09:59.065 18:19:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:59.065 18:19:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:59.066 18:19:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:59.066 [2024-11-20 18:19:17.573391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:59.066 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:59.066 [2024-11-20 18:19:17.575412] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.066 [2024-11-20 18:19:17.575514] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.066 [2024-11-20 18:19:17.575551] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.066 [2024-11-20 18:19:17.575592] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.066 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:59.066 [2024-11-20 18:19:17.578352] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.066 [2024-11-20 18:19:17.578458] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.066 [2024-11-20 18:19:17.578494] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.066 [2024-11-20 18:19:17.578532] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.066 18:19:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:59.066 18:19:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:59.066 [2024-11-20 18:19:17.597934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:59.066 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:59.066 [2024-11-20 18:19:17.599406] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.066 [2024-11-20 18:19:17.599561] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.066 [2024-11-20 18:19:17.599651] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.066 [2024-11-20 18:19:17.599684] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.066 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:59.066 [2024-11-20 18:19:17.601712] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.066 [2024-11-20 18:19:17.601760] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.066 [2024-11-20 18:19:17.601777] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.066 [2024-11-20 18:19:17.601790] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.066 18:19:17 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:59.066 18:19:17 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:59.066 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:59.066 EAL: Scan for (pci) bus failed. 00:09:59.328 18:19:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:59.328 18:19:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:59.328 18:19:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:59.328 00:09:59.328 18:19:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:59.328 18:19:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:59.328 18:19:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:59.328 18:19:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:59.328 18:19:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:59.328 Attaching to 0000:00:10.0 00:09:59.328 Attached to 0000:00:10.0 00:09:59.328 18:19:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:59.328 18:19:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:59.328 18:19:17 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:59.328 Attaching to 0000:00:11.0 00:09:59.328 Attached to 0000:00:11.0 00:10:00.268 QEMU NVMe Ctrl (12340 ): 2956 I/Os completed (+2956) 00:10:00.268 QEMU NVMe Ctrl (12341 ): 2726 I/Os completed (+2726) 00:10:00.268 00:10:01.210 QEMU NVMe Ctrl (12340 ): 5744 I/Os completed (+2788) 00:10:01.210 QEMU NVMe Ctrl (12341 ): 5526 I/Os completed (+2800) 00:10:01.210 00:10:02.153 QEMU NVMe Ctrl (12340 ): 8528 I/Os completed (+2784) 00:10:02.153 QEMU NVMe Ctrl (12341 ): 8312 I/Os completed (+2786) 00:10:02.153 00:10:03.574 QEMU NVMe Ctrl (12340 ): 12135 I/Os completed (+3607) 00:10:03.574 QEMU NVMe Ctrl (12341 ): 11888 I/Os completed (+3576) 00:10:03.574 00:10:04.518 QEMU NVMe Ctrl (12340 ): 14971 I/Os completed (+2836) 00:10:04.518 QEMU NVMe Ctrl (12341 ): 14736 I/Os completed (+2848) 00:10:04.518 00:10:05.462 QEMU NVMe Ctrl (12340 ): 17783 I/Os completed (+2812) 00:10:05.462 QEMU NVMe Ctrl (12341 ): 17548 I/Os completed (+2812) 00:10:05.462 00:10:06.394 QEMU NVMe Ctrl (12340 ): 21127 I/Os completed (+3344) 00:10:06.394 QEMU NVMe Ctrl (12341 ): 20888 I/Os completed (+3340) 
00:10:06.394 00:10:07.332 QEMU NVMe Ctrl (12340 ): 24784 I/Os completed (+3657) 00:10:07.332 QEMU NVMe Ctrl (12341 ): 24518 I/Os completed (+3630) 00:10:07.332 00:10:08.276 QEMU NVMe Ctrl (12340 ): 27996 I/Os completed (+3212) 00:10:08.276 QEMU NVMe Ctrl (12341 ): 27728 I/Os completed (+3210) 00:10:08.276 00:10:09.219 QEMU NVMe Ctrl (12340 ): 31180 I/Os completed (+3184) 00:10:09.219 QEMU NVMe Ctrl (12341 ): 30911 I/Os completed (+3183) 00:10:09.219 00:10:10.163 QEMU NVMe Ctrl (12340 ): 34108 I/Os completed (+2928) 00:10:10.163 QEMU NVMe Ctrl (12341 ): 33840 I/Os completed (+2929) 00:10:10.163 00:10:11.550 QEMU NVMe Ctrl (12340 ): 36724 I/Os completed (+2616) 00:10:11.550 QEMU NVMe Ctrl (12341 ): 36457 I/Os completed (+2617) 00:10:11.550 00:10:11.550 18:19:29 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:11.550 18:19:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:11.550 18:19:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:11.550 18:19:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:11.550 [2024-11-20 18:19:29.923970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:11.550 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:11.550 [2024-11-20 18:19:29.925643] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.550 [2024-11-20 18:19:29.925855] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.550 [2024-11-20 18:19:29.925900] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.550 [2024-11-20 18:19:29.925937] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.550 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:11.550 [2024-11-20 18:19:29.928283] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.550 [2024-11-20 18:19:29.928461] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.550 [2024-11-20 18:19:29.928484] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.550 [2024-11-20 18:19:29.928504] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.550 18:19:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:11.550 18:19:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:11.550 [2024-11-20 18:19:29.947961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:11.550 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:11.550 [2024-11-20 18:19:29.949233] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.550 [2024-11-20 18:19:29.949445] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.550 [2024-11-20 18:19:29.949477] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.550 [2024-11-20 18:19:29.949495] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.550 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:11.550 [2024-11-20 18:19:29.951715] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.550 [2024-11-20 18:19:29.951912] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.550 [2024-11-20 18:19:29.951957] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.550 [2024-11-20 18:19:29.952040] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.550 18:19:29 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:11.550 18:19:29 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:11.550 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:11.550 EAL: Scan for (pci) bus failed. 00:10:11.550 18:19:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:11.550 18:19:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:11.550 18:19:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:11.810 18:19:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:11.810 18:19:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:11.810 18:19:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:11.810 18:19:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:11.810 18:19:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:11.810 Attaching to 0000:00:10.0 00:10:11.810 Attached to 0000:00:10.0 00:10:11.810 18:19:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:11.810 18:19:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:11.810 18:19:30 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:11.810 Attaching to 0000:00:11.0 00:10:11.810 Attached to 0000:00:11.0 00:10:12.381 QEMU NVMe Ctrl (12340 ): 1612 I/Os completed (+1612) 00:10:12.381 QEMU NVMe Ctrl (12341 ): 1316 I/Os completed (+1316) 00:10:12.381 00:10:13.320 QEMU NVMe Ctrl (12340 ): 4628 I/Os completed (+3016) 00:10:13.320 QEMU NVMe Ctrl (12341 ): 4334 I/Os completed (+3018) 00:10:13.320 00:10:14.254 QEMU NVMe Ctrl (12340 ): 8278 I/Os completed (+3650) 00:10:14.254 QEMU NVMe Ctrl (12341 ): 7984 I/Os completed (+3650) 00:10:14.254 00:10:15.189 QEMU NVMe Ctrl (12340 ): 12205 I/Os completed (+3927) 00:10:15.189 QEMU NVMe Ctrl (12341 ): 11917 I/Os completed (+3933) 00:10:15.189 00:10:16.572 QEMU NVMe Ctrl (12340 ): 15581 I/Os completed (+3376) 00:10:16.572 QEMU NVMe Ctrl (12341 ): 15256 I/Os completed (+3339) 00:10:16.572 00:10:17.517 QEMU NVMe Ctrl (12340 ): 18233 I/Os completed (+2652) 00:10:17.517 QEMU NVMe Ctrl (12341 ): 17908 I/Os completed (+2652) 00:10:17.517 00:10:18.462 QEMU NVMe Ctrl (12340 ): 20877 I/Os completed (+2644) 00:10:18.462 QEMU NVMe Ctrl (12341 ): 20557 I/Os completed (+2649) 00:10:18.462 
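Each removal/rescan cycle above is driven by bare `echo` redirects whose sysfs targets the xtrace never prints; the following is a plausible reconstruction of one cycle using standard Linux PCI hotplug paths (the paths are an assumption, not lifted from sw_hotplug.sh; root required):

  for dev in 0000:00:10.0 0000:00:11.0; do
    echo 1 > "/sys/bus/pci/devices/$dev/remove"   # surprise-remove; SPDK then logs "in failed state"
  done
  sleep 6                                          # hotplug_wait, as set above
  echo 1 > /sys/bus/pci/rescan                     # re-enumerate the functions
  for dev in 0000:00:10.0 0000:00:11.0; do
    # steer the re-found device back to the userspace driver
    echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
    echo "$dev" > /sys/bus/pci/drivers_probe
    echo '' > "/sys/bus/pci/devices/$dev/driver_override"
  done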
00:10:19.405 QEMU NVMe Ctrl (12340 ): 23489 I/Os completed (+2612) 00:10:19.405 QEMU NVMe Ctrl (12341 ): 23174 I/Os completed (+2617) 00:10:19.405 00:10:20.351 QEMU NVMe Ctrl (12340 ): 26101 I/Os completed (+2612) 00:10:20.351 QEMU NVMe Ctrl (12341 ): 25786 I/Os completed (+2612) 00:10:20.351 00:10:21.292 QEMU NVMe Ctrl (12340 ): 28701 I/Os completed (+2600) 00:10:21.292 QEMU NVMe Ctrl (12341 ): 28382 I/Os completed (+2596) 00:10:21.292 00:10:22.232 QEMU NVMe Ctrl (12340 ): 31514 I/Os completed (+2813) 00:10:22.232 QEMU NVMe Ctrl (12341 ): 31188 I/Os completed (+2806) 00:10:22.232 00:10:23.166 QEMU NVMe Ctrl (12340 ): 34957 I/Os completed (+3443) 00:10:23.166 QEMU NVMe Ctrl (12341 ): 34453 I/Os completed (+3265) 00:10:23.166 00:10:23.731 18:19:42 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:23.731 18:19:42 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:23.731 18:19:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:23.731 18:19:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:23.731 [2024-11-20 18:19:42.288272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:23.731 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:23.731 [2024-11-20 18:19:42.289542] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.731 [2024-11-20 18:19:42.289668] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.731 [2024-11-20 18:19:42.289707] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.731 [2024-11-20 18:19:42.289777] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.731 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:23.731 [2024-11-20 18:19:42.291747] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.731 [2024-11-20 18:19:42.291854] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.731 [2024-11-20 18:19:42.291891] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.731 [2024-11-20 18:19:42.291957] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.731 18:19:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:23.731 18:19:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:23.731 [2024-11-20 18:19:42.311519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:23.731 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:23.731 [2024-11-20 18:19:42.312677] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.731 [2024-11-20 18:19:42.312783] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.731 [2024-11-20 18:19:42.312861] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.731 [2024-11-20 18:19:42.312894] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.731 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:23.731 [2024-11-20 18:19:42.316111] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.731 [2024-11-20 18:19:42.316206] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.731 [2024-11-20 18:19:42.316280] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.731 [2024-11-20 18:19:42.316310] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:23.731 18:19:42 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:23.731 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:23.731 EAL: Scan for (pci) bus failed. 00:10:23.731 18:19:42 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:23.989 18:19:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:23.989 18:19:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:23.989 18:19:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:23.989 18:19:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:23.989 18:19:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:23.989 18:19:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:23.989 18:19:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:23.989 18:19:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:23.989 Attaching to 0000:00:10.0 00:10:23.989 Attached to 0000:00:10.0 00:10:23.989 18:19:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:23.989 18:19:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:23.989 18:19:42 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:23.989 Attaching to 0000:00:11.0 00:10:23.989 Attached to 0000:00:11.0 00:10:23.989 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:23.989 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:23.989 [2024-11-20 18:19:42.573470] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:36.219 18:19:54 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:36.219 18:19:54 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:36.219 18:19:54 sw_hotplug -- common/autotest_common.sh@719 -- # time=43.00 00:10:36.219 18:19:54 sw_hotplug -- common/autotest_common.sh@720 -- # echo 43.00 00:10:36.219 18:19:54 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:36.219 18:19:54 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.00 00:10:36.219 18:19:54 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.00 2 00:10:36.219 remove_attach_helper took 43.00s to complete (handling 2 nvme drive(s)) 18:19:54 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:42.811 18:20:00 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66602 00:10:42.811 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66602) - No such process 00:10:42.811 18:20:00 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66602 00:10:42.811 18:20:00 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:42.811 18:20:00 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:42.811 18:20:00 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:42.811 18:20:00 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67153 00:10:42.811 18:20:00 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:42.811 18:20:00 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67153 00:10:42.811 18:20:00 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:42.811 18:20:00 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 67153 ']' 00:10:42.811 18:20:00 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.811 18:20:00 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:42.811 18:20:00 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.811 18:20:00 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:42.811 18:20:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:42.811 [2024-11-20 18:20:00.667357] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:10:42.811 [2024-11-20 18:20:00.667507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67153 ] 00:10:42.811 [2024-11-20 18:20:00.832294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.811 [2024-11-20 18:20:00.954665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.071 18:20:01 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:43.071 18:20:01 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:43.071 18:20:01 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:43.071 18:20:01 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.071 18:20:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:43.071 18:20:01 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.071 18:20:01 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:43.071 18:20:01 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:43.071 18:20:01 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:43.071 18:20:01 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:43.071 18:20:01 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:43.071 18:20:01 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:43.071 18:20:01 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:43.071 18:20:01 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:43.071 18:20:01 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:43.071 18:20:01 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:43.071 18:20:01 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:43.071 18:20:01 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:43.071 18:20:01 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:49.644 18:20:07 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:49.644 18:20:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:49.644 18:20:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:49.644 18:20:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:49.644 18:20:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:49.644 18:20:07 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:49.644 18:20:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:49.644 18:20:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:49.644 18:20:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:49.644 18:20:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:49.644 18:20:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:49.644 18:20:07 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.644 18:20:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:49.644 18:20:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.644 18:20:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:49.644 18:20:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:49.644 [2024-11-20 18:20:07.751066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:10:49.644 [2024-11-20 18:20:07.752376] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.644 [2024-11-20 18:20:07.752411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.644 [2024-11-20 18:20:07.752424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.644 [2024-11-20 18:20:07.752441] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.644 [2024-11-20 18:20:07.752448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.644 [2024-11-20 18:20:07.752457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.645 [2024-11-20 18:20:07.752464] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.645 [2024-11-20 18:20:07.752472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.645 [2024-11-20 18:20:07.752478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.645 [2024-11-20 18:20:07.752489] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.645 [2024-11-20 18:20:07.752495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.645 [2024-11-20 18:20:07.752503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.645 [2024-11-20 18:20:08.151055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
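The repeated nvme_pcie_qpair_abort_trackers / "ABORTED - BY REQUEST" lines above are the expected fallout of a surprise hot-remove, not a test failure: with the hotplug monitor enabled (sw_hotplug.sh@115 above), the driver aborts the in-flight admin commands of the vanished controller (the outstanding ASYNC EVENT REQUESTs) and moves it to the failed state. The monitor itself is toggled over JSON-RPC; with the default socket from this run, the two calls visible in the xtrace come down to:

    # Enable the bdev_nvme hotplug monitor before pulling devices,
    # disable it again afterwards (both calls appear in the xtrace).
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_set_hotplug -e
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_set_hotplug -d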
00:10:49.645 [2024-11-20 18:20:08.152248] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.645 [2024-11-20 18:20:08.152279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.645 [2024-11-20 18:20:08.152290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.645 [2024-11-20 18:20:08.152303] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.645 [2024-11-20 18:20:08.152311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.645 [2024-11-20 18:20:08.152318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.645 [2024-11-20 18:20:08.152327] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.645 [2024-11-20 18:20:08.152333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.645 [2024-11-20 18:20:08.152341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.645 [2024-11-20 18:20:08.152348] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.645 [2024-11-20 18:20:08.152356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.645 [2024-11-20 18:20:08.152362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.645 18:20:08 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:49.645 18:20:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:49.645 18:20:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:49.645 18:20:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:49.645 18:20:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:49.645 18:20:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:49.645 18:20:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.645 18:20:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:49.645 18:20:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.903 18:20:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:49.903 18:20:08 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:49.903 18:20:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:49.903 18:20:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:49.903 18:20:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:49.903 18:20:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:49.903 18:20:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:49.903 18:20:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:49.903 18:20:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:49.903 18:20:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:49.903 18:20:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:49.903 18:20:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:49.903 18:20:08 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:02.107 18:20:20 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:02.107 18:20:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:02.107 18:20:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:02.107 18:20:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:02.107 18:20:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:02.107 18:20:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:02.107 18:20:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.107 18:20:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:02.107 18:20:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.107 18:20:20 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:02.107 18:20:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:02.107 18:20:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:02.107 18:20:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:02.107 [2024-11-20 18:20:20.551303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:02.107 [2024-11-20 18:20:20.552726] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.107 [2024-11-20 18:20:20.552835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.107 [2024-11-20 18:20:20.552903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.107 [2024-11-20 18:20:20.552959] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.107 [2024-11-20 18:20:20.552978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.107 [2024-11-20 18:20:20.553036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.107 [2024-11-20 18:20:20.553065] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.107 [2024-11-20 18:20:20.553082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.107 [2024-11-20 18:20:20.553140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.107 [2024-11-20 18:20:20.553255] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.107 [2024-11-20 18:20:20.553307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.107 [2024-11-20 18:20:20.553353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.107 18:20:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:02.107 18:20:20 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:02.107 18:20:20 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:02.107 18:20:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:02.107 18:20:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:02.107 18:20:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:02.107 18:20:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:02.107 18:20:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.107 18:20:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:02.107 18:20:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:02.107 18:20:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.107 18:20:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:02.107 18:20:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:02.368 [2024-11-20 18:20:20.951303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:02.368 [2024-11-20 18:20:20.952430] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.368 [2024-11-20 18:20:20.952459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.368 [2024-11-20 18:20:20.952475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.368 [2024-11-20 18:20:20.952488] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.368 [2024-11-20 18:20:20.952498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.368 [2024-11-20 18:20:20.952505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.368 [2024-11-20 18:20:20.952513] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.368 [2024-11-20 18:20:20.952520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.368 [2024-11-20 18:20:20.952527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.368 [2024-11-20 18:20:20.952534] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.368 [2024-11-20 18:20:20.952542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.368 [2024-11-20 18:20:20.952548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.626 18:20:21 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:02.626 18:20:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:02.626 18:20:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:02.626 18:20:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:02.626 18:20:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:02.626 18:20:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:11:02.626 18:20:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.626 18:20:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:02.626 18:20:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.626 18:20:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:02.626 18:20:21 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:02.626 18:20:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:02.626 18:20:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:02.626 18:20:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:02.884 18:20:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:02.884 18:20:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:02.884 18:20:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:02.884 18:20:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:02.884 18:20:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:02.884 18:20:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:02.884 18:20:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:02.884 18:20:21 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:15.083 18:20:33 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:15.083 18:20:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:15.083 18:20:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:15.083 18:20:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:15.083 18:20:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:15.083 18:20:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:15.083 18:20:33 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.083 18:20:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:15.083 18:20:33 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.083 18:20:33 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:15.083 18:20:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:15.083 18:20:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:15.083 18:20:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:15.084 18:20:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:15.084 18:20:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:15.084 [2024-11-20 18:20:33.451555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
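The bdev_bdfs helper the loop keeps calling (sw_hotplug.sh@12-@13) reduces to one RPC plus a jq filter; the /dev/fd/63 in the trace is just bash process substitution. Reconstructed from the xtrace:

    bdev_bdfs() {
        # List every bdev, pull the PCI address of each NVMe namespace,
        # and de-duplicate (one controller can back several bdevs).
        jq -r '.[].driver_specific.nvme[].pci_address' \
            <(rpc_cmd bdev_get_bdevs) | sort -u
    }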
00:11:15.084 [2024-11-20 18:20:33.452858] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.084 [2024-11-20 18:20:33.452992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.084 [2024-11-20 18:20:33.453034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.084 [2024-11-20 18:20:33.453069] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.084 [2024-11-20 18:20:33.453086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.084 [2024-11-20 18:20:33.453124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.084 [2024-11-20 18:20:33.453148] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.084 [2024-11-20 18:20:33.453165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.084 [2024-11-20 18:20:33.453224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.084 [2024-11-20 18:20:33.453253] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.084 [2024-11-20 18:20:33.453269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.084 [2024-11-20 18:20:33.453294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.084 18:20:33 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:15.084 18:20:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:15.084 18:20:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:15.084 18:20:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:15.084 18:20:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:15.084 18:20:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:15.084 18:20:33 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.084 18:20:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:15.084 18:20:33 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.084 18:20:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:15.084 18:20:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:15.651 18:20:33 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:15.651 18:20:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:15.651 18:20:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:15.651 18:20:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:15.651 18:20:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:15.651 18:20:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:15.651 18:20:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.651 18:20:34 sw_hotplug -- common/autotest_common.sh@10 -- # set 
+x 00:11:15.651 18:20:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.651 18:20:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:15.651 18:20:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:15.651 [2024-11-20 18:20:34.051557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:15.651 [2024-11-20 18:20:34.052750] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.651 [2024-11-20 18:20:34.052847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.651 [2024-11-20 18:20:34.052911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.651 [2024-11-20 18:20:34.052943] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.651 [2024-11-20 18:20:34.052991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.651 [2024-11-20 18:20:34.053017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.651 [2024-11-20 18:20:34.053068] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.651 [2024-11-20 18:20:34.053086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.651 [2024-11-20 18:20:34.053152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.651 [2024-11-20 18:20:34.053179] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.651 [2024-11-20 18:20:34.053197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.651 [2024-11-20 18:20:34.053252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.217 18:20:34 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:16.217 18:20:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:16.217 18:20:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:16.217 18:20:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:16.217 18:20:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:16.217 18:20:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:16.217 18:20:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.217 18:20:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:16.217 18:20:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.217 18:20:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:16.217 18:20:34 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:16.217 18:20:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:16.217 18:20:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:16.217 18:20:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:16.217 18:20:34 sw_hotplug -- 
nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:16.217 18:20:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:16.217 18:20:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:16.217 18:20:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:16.217 18:20:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:16.217 18:20:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:16.217 18:20:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:16.217 18:20:34 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:28.420 18:20:46 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:28.420 18:20:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:28.420 18:20:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:28.420 18:20:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:28.420 18:20:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:28.420 18:20:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:28.420 18:20:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.420 18:20:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:28.420 18:20:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.420 18:20:46 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:28.420 18:20:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:28.420 18:20:46 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.20 00:11:28.420 18:20:46 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.20 00:11:28.420 18:20:46 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:28.420 18:20:46 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.20 00:11:28.420 18:20:46 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.20 2 00:11:28.420 remove_attach_helper took 45.20s to complete (handling 2 nvme drive(s)) 18:20:46 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:28.420 18:20:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.420 18:20:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:28.420 18:20:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.420 18:20:46 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:28.420 18:20:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.420 18:20:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:28.420 18:20:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.420 18:20:46 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:28.420 18:20:46 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:28.420 18:20:46 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:28.420 18:20:46 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:28.420 18:20:46 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:28.420 18:20:46 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:28.420 18:20:46 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:28.420 18:20:46 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 
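The autotest_common.sh@709-@722 trail above is the timing wrapper that produces the "remove_attach_helper took 45.20s" summaries: it runs the helper under bash's time builtin with TIMEFORMAT=%2R and echoes the captured seconds back to debug_remove_attach_helper. A sketch consistent with that xtrace:

    timing_cmd() {
        local cmd_es=0 time=0 TIMEFORMAT=%2R
        # The braces route the command's own output to the caller's stderr,
        # while the %2R figure printed by `time` is captured on stdout.
        time=$({ time "$@" >&2; } 2>&1) || cmd_es=$?
        echo "$time"
        return "$cmd_es"
    }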
00:11:28.420 18:20:46 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:28.420 18:20:46 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:28.420 18:20:46 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:28.420 18:20:46 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:28.420 18:20:46 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:34.975 18:20:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:34.975 18:20:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:34.975 18:20:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:34.975 18:20:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:34.975 18:20:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:34.975 18:20:52 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:34.975 18:20:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:34.975 18:20:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:34.975 18:20:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:34.975 18:20:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:34.975 18:20:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:34.975 18:20:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.975 18:20:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:34.975 18:20:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.975 18:20:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:34.975 18:20:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:34.975 [2024-11-20 18:20:52.978960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
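Piecing together the sw_hotplug.sh line numbers above (@27-@30 locals, @36 settle sleep, @38 event loop, @39-@40 a literal `echo 1` per device), every hotplug event follows the same skeleton. The sysfs path is inferred, since the xtrace shows only the value being written:

    remove_attach_helper() {
        local hotplug_events=$1 hotplug_wait=$2 use_bdev=$3
        local dev bdfs
        sleep "$hotplug_wait"                # @36: let the freshly started target settle
        while ((hotplug_events--)); do       # @38: three events in this run
            for dev in "${nvmes[@]}"; do     # @39-@40
                echo 1 > "/sys/bus/pci/devices/$dev/remove"   # path assumed
            done
            # ...@43-@51: wait until the bdevs are gone (see the poll loop below),
            # then @56-@66: rescan the bus and re-bind the controllers...
        done
    }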
00:11:34.975 [2024-11-20 18:20:52.979871] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.975 [2024-11-20 18:20:52.979996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.976 [2024-11-20 18:20:52.980011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.976 [2024-11-20 18:20:52.980028] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.976 [2024-11-20 18:20:52.980035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.976 [2024-11-20 18:20:52.980044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.976 [2024-11-20 18:20:52.980051] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.976 [2024-11-20 18:20:52.980059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.976 [2024-11-20 18:20:52.980066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.976 [2024-11-20 18:20:52.980074] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.976 [2024-11-20 18:20:52.980081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.976 [2024-11-20 18:20:52.980090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.976 18:20:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:34.976 18:20:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:34.976 18:20:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:34.976 18:20:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:34.976 18:20:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:34.976 18:20:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:34.976 18:20:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.976 18:20:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:34.976 18:20:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.976 18:20:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:34.976 18:20:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:35.234 [2024-11-20 18:20:53.679151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:35.234 [2024-11-20 18:20:53.680270] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.234 [2024-11-20 18:20:53.680300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.234 [2024-11-20 18:20:53.680312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.234 [2024-11-20 18:20:53.680326] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.234 [2024-11-20 18:20:53.680334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.234 [2024-11-20 18:20:53.680342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.234 [2024-11-20 18:20:53.680351] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.234 [2024-11-20 18:20:53.680357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.234 [2024-11-20 18:20:53.680365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.234 [2024-11-20 18:20:53.680373] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.234 [2024-11-20 18:20:53.680380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.234 [2024-11-20 18:20:53.680387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.234 [2024-11-20 18:20:53.680397] bdev_nvme.c:5568:aer_cb: *WARNING*: AER request execute failed 00:11:35.234 [2024-11-20 18:20:53.680405] bdev_nvme.c:5568:aer_cb: *WARNING*: AER request execute failed 00:11:35.234 [2024-11-20 18:20:53.680414] bdev_nvme.c:5568:aer_cb: *WARNING*: AER request execute failed 00:11:35.234 [2024-11-20 18:20:53.680419] bdev_nvme.c:5568:aer_cb: *WARNING*: AER request execute failed 00:11:35.493 18:20:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:35.493 18:20:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:35.493 18:20:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:35.493 18:20:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:35.493 18:20:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:35.493 18:20:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:35.493 18:20:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.493 18:20:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:35.493 18:20:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.493 18:20:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:35.493 18:20:54 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:35.493 18:20:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:35.493 18:20:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:35.493 18:20:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:10.0 00:11:35.751 18:20:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:35.751 18:20:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:35.751 18:20:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:35.751 18:20:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:35.751 18:20:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:35.751 18:20:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:35.751 18:20:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:35.751 18:20:54 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:47.948 18:21:06 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:47.948 18:21:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:47.948 18:21:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:47.948 18:21:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:47.948 18:21:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:47.948 18:21:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:47.948 18:21:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.948 18:21:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:47.948 18:21:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.948 18:21:06 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:47.948 18:21:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:47.948 18:21:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:47.948 18:21:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:47.948 18:21:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:47.948 18:21:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:47.948 18:21:06 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:47.948 18:21:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:47.948 18:21:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:47.948 18:21:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:47.948 18:21:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:47.948 18:21:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:47.948 18:21:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.948 18:21:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:47.948 18:21:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.948 18:21:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:47.948 18:21:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:47.948 [2024-11-20 18:21:06.379407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
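Once the devices are gone, @56-@66 bring them back: the same `echo 1 > /sys/bus/pci/rescan` that the cleanup trap installed at sw_hotplug.sh@112 uses, then per device a driver name, the BDF (twice), and an empty string are echoed into sysfs before the hotplug monitor is given 12 seconds to re-attach. The values written are in the trace, but the target files are not, so the paths below are assumptions based on the standard driver_override mechanism:

    echo 1 > /sys/bus/pci/rescan                                            # @56
    for dev in "${nvmes[@]}"; do                                            # @58
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # @59 (path assumed)
        # @60/@61: the BDF itself is echoed twice in the trace (targets not
        # shown, plausibly an unbind/probe pair); drivers_probe re-binds it:
        echo "$dev" > /sys/bus/pci/drivers_probe
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"               # @62: clear the override
    done
    sleep 12                                                                # @66: let hotplug re-attach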
00:11:47.948 [2024-11-20 18:21:06.380360] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.948 [2024-11-20 18:21:06.380458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.948 [2024-11-20 18:21:06.380517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.948 [2024-11-20 18:21:06.380572] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.948 [2024-11-20 18:21:06.380591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.948 [2024-11-20 18:21:06.380638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.948 [2024-11-20 18:21:06.380663] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.948 [2024-11-20 18:21:06.380681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.948 [2024-11-20 18:21:06.380734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.948 [2024-11-20 18:21:06.380763] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.949 [2024-11-20 18:21:06.380779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.949 [2024-11-20 18:21:06.380805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.515 18:21:06 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:48.515 18:21:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:48.515 18:21:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:48.515 18:21:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:48.515 18:21:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:48.515 18:21:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:48.515 18:21:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.515 18:21:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:48.515 18:21:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.515 [2024-11-20 18:21:06.879410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
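Everything tagged rpc_cmd in this log (bdev_get_bdevs, bdev_nvme_set_hotplug, ...) goes through the test suite's wrapper around SPDK's JSON-RPC client. The in-tree helper keeps a persistent RPC session for speed; as a simplified stand-in it behaves like:

    rootdir=/home/vagrant/spdk_repo/spdk     # repo path as used elsewhere in this log
    rpc_cmd() {
        "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"
    }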
00:11:48.515 [2024-11-20 18:21:06.881927] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.515 [2024-11-20 18:21:06.882034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.515 [2024-11-20 18:21:06.882106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.515 [2024-11-20 18:21:06.882162] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.515 [2024-11-20 18:21:06.882210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.515 [2024-11-20 18:21:06.882257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.515 [2024-11-20 18:21:06.882287] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.515 [2024-11-20 18:21:06.882303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.515 [2024-11-20 18:21:06.882356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.515 [2024-11-20 18:21:06.882403] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.515 [2024-11-20 18:21:06.882421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.515 [2024-11-20 18:21:06.882444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.515 18:21:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:48.515 18:21:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:48.773 18:21:07 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:48.773 18:21:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:49.032 18:21:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:49.032 18:21:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:49.032 18:21:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:49.032 18:21:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:49.032 18:21:07 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.032 18:21:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:49.032 18:21:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.032 18:21:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:49.032 18:21:07 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:49.032 18:21:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:49.032 18:21:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:49.032 18:21:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:49.032 18:21:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:49.032 18:21:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:49.032 18:21:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:49.032 18:21:07 
sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:49.032 18:21:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:49.032 18:21:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:49.032 18:21:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:49.032 18:21:07 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:01.234 18:21:19 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:01.234 18:21:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:01.234 18:21:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:01.234 18:21:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:01.234 18:21:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:01.234 18:21:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:01.234 18:21:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.234 18:21:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:01.234 18:21:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.234 18:21:19 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:01.234 18:21:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:01.234 18:21:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:01.234 18:21:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:01.234 18:21:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:01.234 18:21:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:01.234 18:21:19 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:01.234 18:21:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:01.234 18:21:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:01.234 18:21:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:01.234 18:21:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:01.234 18:21:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:01.234 18:21:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.234 18:21:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:01.234 18:21:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.234 18:21:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:01.234 18:21:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:01.234 [2024-11-20 18:21:19.779632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
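The heavily backslashed comparison at @71 is not log corruption: the right-hand side of `[[ lhs == rhs ]]` is glob-pattern context, so bash's xtrace escapes every literal character when printing the expanded line. The check itself simply verifies that the re-attached controllers match the expected list:

    expected="0000:00:10.0 0000:00:11.0"
    [[ ${bdfs[*]} == $expected ]]   # printed by xtrace as \0\0\0\0\:\0\0\:\1\0\.\0 ...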
00:12:01.234 [2024-11-20 18:21:19.780522] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.234 [2024-11-20 18:21:19.780548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.234 [2024-11-20 18:21:19.780559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.234 [2024-11-20 18:21:19.780578] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.234 [2024-11-20 18:21:19.780585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.234 [2024-11-20 18:21:19.780593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.234 [2024-11-20 18:21:19.780600] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.234 [2024-11-20 18:21:19.780608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.234 [2024-11-20 18:21:19.780615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.234 [2024-11-20 18:21:19.780623] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.234 [2024-11-20 18:21:19.780630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.234 [2024-11-20 18:21:19.780638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.800 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:01.801 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:01.801 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:01.801 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:01.801 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:01.801 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:01.801 18:21:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.801 18:21:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:01.801 18:21:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.801 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:01.801 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:02.059 [2024-11-20 18:21:20.479641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:02.059 [2024-11-20 18:21:20.480479] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.059 [2024-11-20 18:21:20.480505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.059 [2024-11-20 18:21:20.480517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.059 [2024-11-20 18:21:20.480531] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.059 [2024-11-20 18:21:20.480540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.059 [2024-11-20 18:21:20.480547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.059 [2024-11-20 18:21:20.480556] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.059 [2024-11-20 18:21:20.480563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.059 [2024-11-20 18:21:20.480573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.059 [2024-11-20 18:21:20.480580] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.059 [2024-11-20 18:21:20.480588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.060 [2024-11-20 18:21:20.480594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.318 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:02.318 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:02.318 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:02.318 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:02.318 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:02.318 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:02.319 18:21:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.319 18:21:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:02.319 18:21:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.319 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:02.319 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:02.319 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:02.319 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:02.319 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:02.577 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:02.577 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:02.577 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:02.577 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:02.577 18:21:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:12:02.577 18:21:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:02.577 18:21:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:02.577 18:21:21 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:14.794 18:21:33 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:14.794 18:21:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:14.794 18:21:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:14.794 18:21:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:14.794 18:21:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:14.794 18:21:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:14.794 18:21:33 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.794 18:21:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:14.794 18:21:33 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.794 18:21:33 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:14.794 18:21:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:14.794 18:21:33 sw_hotplug -- common/autotest_common.sh@719 -- # time=46.21 00:12:14.794 18:21:33 sw_hotplug -- common/autotest_common.sh@720 -- # echo 46.21 00:12:14.794 18:21:33 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:14.794 18:21:33 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=46.21 00:12:14.794 18:21:33 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 46.21 2 00:12:14.794 remove_attach_helper took 46.21s to complete (handling 2 nvme drive(s)) 18:21:33 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:14.794 18:21:33 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67153 00:12:14.794 18:21:33 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 67153 ']' 00:12:14.794 18:21:33 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 67153 00:12:14.794 18:21:33 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:12:14.794 18:21:33 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:14.794 18:21:33 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67153 00:12:14.794 killing process with pid 67153 00:12:14.794 18:21:33 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:14.794 18:21:33 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:14.794 18:21:33 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67153' 00:12:14.794 18:21:33 sw_hotplug -- common/autotest_common.sh@973 -- # kill 67153 00:12:14.794 18:21:33 sw_hotplug -- common/autotest_common.sh@978 -- # wait 67153 00:12:15.732 18:21:34 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:15.993 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:16.567 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:16.567 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:16.567 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:16.828 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:16.828 00:12:16.828 real 2m31.396s 00:12:16.828 user 1m52.998s 00:12:16.828 sys 0m16.959s 00:12:16.828 18:21:35 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:12:16.828 18:21:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:16.828 ************************************ 00:12:16.828 END TEST sw_hotplug 00:12:16.828 ************************************ 00:12:16.828 18:21:35 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:16.828 18:21:35 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:16.828 18:21:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:16.828 18:21:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:16.828 18:21:35 -- common/autotest_common.sh@10 -- # set +x 00:12:16.828 ************************************ 00:12:16.828 START TEST nvme_xnvme 00:12:16.828 ************************************ 00:12:16.828 18:21:35 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:16.828 * Looking for test storage... 00:12:16.828 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:16.828 18:21:35 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:16.828 18:21:35 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:16.828 18:21:35 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:17.092 18:21:35 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:17.092 18:21:35 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:17.092 18:21:35 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:17.092 18:21:35 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:17.092 18:21:35 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:17.093 18:21:35 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:17.093 18:21:35 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:17.093 18:21:35 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:17.093 18:21:35 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:17.093 18:21:35 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:17.093 18:21:35 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:17.093 18:21:35 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:17.093 18:21:35 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:17.093 18:21:35 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:17.093 18:21:35 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:17.093 18:21:35 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:17.093 18:21:35 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:17.093 18:21:35 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:17.093 18:21:35 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:17.093 18:21:35 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:17.093 18:21:35 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:17.093 18:21:35 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:17.093 18:21:35 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:17.093 18:21:35 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:17.093 18:21:35 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:17.093 18:21:35 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:17.093 18:21:35 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:17.093 18:21:35 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:17.093 18:21:35 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:17.093 18:21:35 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:17.093 18:21:35 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:17.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.093 --rc genhtml_branch_coverage=1 00:12:17.093 --rc genhtml_function_coverage=1 00:12:17.093 --rc genhtml_legend=1 00:12:17.093 --rc geninfo_all_blocks=1 00:12:17.093 --rc geninfo_unexecuted_blocks=1 00:12:17.093 00:12:17.093 ' 00:12:17.093 18:21:35 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:17.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.093 --rc genhtml_branch_coverage=1 00:12:17.093 --rc genhtml_function_coverage=1 00:12:17.093 --rc genhtml_legend=1 00:12:17.093 --rc geninfo_all_blocks=1 00:12:17.093 --rc geninfo_unexecuted_blocks=1 00:12:17.093 00:12:17.093 ' 00:12:17.093 18:21:35 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:17.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.093 --rc genhtml_branch_coverage=1 00:12:17.093 --rc genhtml_function_coverage=1 00:12:17.093 --rc genhtml_legend=1 00:12:17.093 --rc geninfo_all_blocks=1 00:12:17.093 --rc geninfo_unexecuted_blocks=1 00:12:17.093 00:12:17.093 ' 00:12:17.093 18:21:35 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:17.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.093 --rc genhtml_branch_coverage=1 00:12:17.093 --rc genhtml_function_coverage=1 00:12:17.093 --rc genhtml_legend=1 00:12:17.093 --rc geninfo_all_blocks=1 00:12:17.093 --rc geninfo_unexecuted_blocks=1 00:12:17.093 00:12:17.093 ' 00:12:17.093 18:21:35 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:12:17.093 18:21:35 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:12:17.093 18:21:35 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:17.093 18:21:35 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:12:17.093 18:21:35 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:17.093 18:21:35 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:17.093 18:21:35 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:17.093 18:21:35 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:12:17.093 18:21:35 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:12:17.093 18:21:35 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
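Each CONFIG_* assignment being sourced from build_config.sh here corresponds to an option chosen when SPDK was configured, and test scripts consult these flags to gate what they run (for example CONFIG_XNVME, set to y a few entries below). A minimal illustrative guard, not taken from the log, assuming the build_config.sh path shown above:

  source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh
  if [[ $CONFIG_XNVME != y ]]; then
      echo "SKIP: SPDK built without xnvme support"
      exit 0
  fi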
00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:17.093 18:21:35 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:17.094 18:21:35 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:17.094 18:21:35 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:17.094 18:21:35 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:17.094 18:21:35 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:17.094 18:21:35 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:17.094 18:21:35 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:17.094 18:21:35 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:17.094 18:21:35 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:17.094 18:21:35 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:17.094 18:21:35 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:17.094 18:21:35 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:17.094 18:21:35 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:17.094 18:21:35 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:17.094 18:21:35 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:17.094 18:21:35 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:17.094 18:21:35 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:17.094 18:21:35 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:17.094 18:21:35 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:17.094 18:21:35 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:12:17.094 18:21:35 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:12:17.094 18:21:35 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:12:17.094 18:21:35 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:12:17.094 18:21:35 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:12:17.094 18:21:35 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:12:17.094 18:21:35 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:17.094 18:21:35 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:17.094 18:21:35 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:17.094 18:21:35 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:17.094 18:21:35 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:17.094 18:21:35 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:17.094 18:21:35 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:12:17.094 18:21:35 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:17.094 #define SPDK_CONFIG_H 00:12:17.094 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:17.094 #define SPDK_CONFIG_APPS 1 00:12:17.094 #define SPDK_CONFIG_ARCH native 00:12:17.094 #define SPDK_CONFIG_ASAN 1 00:12:17.094 #undef SPDK_CONFIG_AVAHI 00:12:17.094 #undef SPDK_CONFIG_CET 00:12:17.094 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:17.094 #define SPDK_CONFIG_COVERAGE 1 00:12:17.094 #define SPDK_CONFIG_CROSS_PREFIX 00:12:17.094 #undef SPDK_CONFIG_CRYPTO 00:12:17.094 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:17.094 #undef SPDK_CONFIG_CUSTOMOCF 00:12:17.094 #undef SPDK_CONFIG_DAOS 00:12:17.094 #define SPDK_CONFIG_DAOS_DIR 00:12:17.094 #define SPDK_CONFIG_DEBUG 1 00:12:17.094 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:17.094 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:12:17.094 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:17.094 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:17.094 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:17.094 #undef SPDK_CONFIG_DPDK_UADK 00:12:17.094 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:17.094 #define SPDK_CONFIG_EXAMPLES 1 00:12:17.094 #undef SPDK_CONFIG_FC 00:12:17.094 #define SPDK_CONFIG_FC_PATH 00:12:17.094 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:17.094 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:17.094 #define SPDK_CONFIG_FSDEV 1 00:12:17.094 #undef SPDK_CONFIG_FUSE 00:12:17.094 #undef SPDK_CONFIG_FUZZER 00:12:17.094 #define SPDK_CONFIG_FUZZER_LIB 00:12:17.094 #undef SPDK_CONFIG_GOLANG 00:12:17.094 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:17.094 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:17.094 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:17.094 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:17.094 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:17.094 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:17.094 #undef SPDK_CONFIG_HAVE_LZ4 00:12:17.094 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:17.094 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:17.094 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:17.094 #define SPDK_CONFIG_IDXD 1 00:12:17.094 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:17.094 #undef SPDK_CONFIG_IPSEC_MB 00:12:17.094 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:17.094 #define SPDK_CONFIG_ISAL 1 00:12:17.094 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:17.094 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:17.094 #define SPDK_CONFIG_LIBDIR 00:12:17.094 #undef SPDK_CONFIG_LTO 00:12:17.094 #define SPDK_CONFIG_MAX_LCORES 128 00:12:17.094 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:17.094 #define SPDK_CONFIG_NVME_CUSE 1 00:12:17.094 #undef SPDK_CONFIG_OCF 00:12:17.094 #define SPDK_CONFIG_OCF_PATH 00:12:17.094 #define SPDK_CONFIG_OPENSSL_PATH 00:12:17.094 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:17.094 #define SPDK_CONFIG_PGO_DIR 00:12:17.094 #undef SPDK_CONFIG_PGO_USE 00:12:17.094 #define SPDK_CONFIG_PREFIX /usr/local 00:12:17.094 #undef SPDK_CONFIG_RAID5F 00:12:17.094 #undef SPDK_CONFIG_RBD 00:12:17.094 #define SPDK_CONFIG_RDMA 1 00:12:17.094 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:17.094 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:17.094 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:17.094 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:17.094 #define SPDK_CONFIG_SHARED 1 00:12:17.094 #undef SPDK_CONFIG_SMA 00:12:17.094 #define SPDK_CONFIG_TESTS 1 00:12:17.094 #undef SPDK_CONFIG_TSAN 00:12:17.094 #define SPDK_CONFIG_UBLK 1 00:12:17.094 #define SPDK_CONFIG_UBSAN 1 00:12:17.094 #undef SPDK_CONFIG_UNIT_TESTS 00:12:17.094 #undef SPDK_CONFIG_URING 00:12:17.094 #define SPDK_CONFIG_URING_PATH 00:12:17.094 #undef SPDK_CONFIG_URING_ZNS 00:12:17.094 #undef SPDK_CONFIG_USDT 00:12:17.094 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:17.094 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:17.094 #undef SPDK_CONFIG_VFIO_USER 00:12:17.094 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:17.094 #define SPDK_CONFIG_VHOST 1 00:12:17.094 #define SPDK_CONFIG_VIRTIO 1 00:12:17.094 #undef SPDK_CONFIG_VTUNE 00:12:17.094 #define SPDK_CONFIG_VTUNE_DIR 00:12:17.094 #define SPDK_CONFIG_WERROR 1 00:12:17.094 #define SPDK_CONFIG_WPDK_DIR 00:12:17.094 #define SPDK_CONFIG_XNVME 1 00:12:17.094 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:17.094 18:21:35 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:17.094 18:21:35 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:17.094 18:21:35 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:17.094 18:21:35 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:17.094 18:21:35 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:17.094 18:21:35 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:17.094 18:21:35 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.094 18:21:35 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.094 18:21:35 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.094 18:21:35 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:17.094 18:21:35 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.094 18:21:35 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:17.094 18:21:35 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:17.094 18:21:35 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:17.094 18:21:35 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:17.094 18:21:35 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:12:17.094 18:21:35 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:12:17.094 18:21:35 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:12:17.094 18:21:35 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:12:17.094 18:21:35 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:12:17.094 18:21:35 nvme_xnvme -- pm/common@68 -- # uname -s 00:12:17.094 18:21:35 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:12:17.094 18:21:35 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:17.094 
18:21:35 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:17.095 18:21:35 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:17.095 18:21:35 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:17.095 18:21:35 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:17.095 18:21:35 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:17.095 18:21:35 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:12:17.095 18:21:35 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:17.095 18:21:35 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:17.095 18:21:35 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:17.095 18:21:35 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:17.095 18:21:35 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:12:17.095 18:21:35 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@58 -- # : 1 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:12:17.095 18:21:35 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:17.095 18:21:35 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:17.095 18:21:35 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:17.096 18:21:35 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
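The sanitizer wiring traced at autotest_common.sh@199-@244 reduces to a handful of exports: hard-fail options for ASan and UBSan, plus a LeakSanitizer suppression file listing known-benign leaks. Collected into one runnable sketch; the values are copied from the trace, while the >> target is inferred because xtrace omits redirections:

  export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
  export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
  asan_suppression_file=/var/tmp/asan_suppression_file
  rm -rf "$asan_suppression_file"
  echo "leak:libfuse3.so" >> "$asan_suppression_file"
  export LSAN_OPTIONS=suppressions=$asan_suppression_file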
00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 68538 ]] 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 68538 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.4WPKXL 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.4WPKXL/tests/xnvme /tmp/spdk.4WPKXL 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:17.096 18:21:35 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13965844480 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5602074624 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6260629504 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:12:17.096 18:21:35 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13965844480 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5602074624 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:17.097 18:21:35 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265245696 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=98881392640 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=821387264 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:17.097 * Looking for test storage... 
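The df -T loop above fills per-mount associative arrays (fss, sizes, avails, uses); the candidate walk that follows then takes the first directory whose backing mount has enough free space and is not RAM-backed. A condensed sketch of that selection, with paths mirroring the log and the slack constant matching requested_size=2214592512 (2 GiB + 64 MiB); the *1024 conversion is inferred from the byte-sized values in the trace:

  declare -A fss avails
  while read -r source fs size use avail _ mount; do
      fss["$mount"]=$fs
      avails["$mount"]=$((avail * 1024))       # df -T reports 1K blocks
  done < <(df -T | grep -v Filesystem)

  requested_size=$((2147483648 + 67108864))    # 2 GiB + slack
  testdir=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme
  storage_fallback=$(mktemp -udt spdk.XXXXXX)
  mkdir -p "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"
  for target_dir in "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"; do
      mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
      target_space=${avails[$mount]}
      (( target_space >= requested_size )) || continue
      [[ ${fss[$mount]} == tmpfs || ${fss[$mount]} == ramfs ]] && continue
      export SPDK_TEST_STORAGE=$target_dir
      printf '* Found test storage at %s\n' "$target_dir"
      break
  done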
00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13965844480 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:17.097 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:17.097 18:21:35 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:17.097 18:21:35 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:17.097 18:21:35 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:17.097 18:21:35 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:17.097 18:21:35 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:17.097 18:21:35 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:17.097 18:21:35 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:17.097 18:21:35 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:17.097 18:21:35 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:17.097 18:21:35 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:17.097 18:21:35 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:17.097 18:21:35 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:17.097 18:21:35 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:17.097 18:21:35 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:17.097 18:21:35 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:17.097 18:21:35 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:17.097 18:21:35 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:17.097 18:21:35 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:17.097 18:21:35 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:17.097 18:21:35 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:17.097 18:21:35 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:17.097 18:21:35 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:17.097 18:21:35 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:17.097 18:21:35 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:17.097 18:21:35 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:17.097 18:21:35 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:17.098 18:21:35 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:17.098 18:21:35 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:17.098 18:21:35 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:17.098 18:21:35 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:17.098 18:21:35 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:17.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.098 --rc genhtml_branch_coverage=1 00:12:17.098 --rc genhtml_function_coverage=1 00:12:17.098 --rc genhtml_legend=1 00:12:17.098 --rc geninfo_all_blocks=1 00:12:17.098 --rc geninfo_unexecuted_blocks=1 00:12:17.098 00:12:17.098 ' 00:12:17.098 18:21:35 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:17.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.098 --rc genhtml_branch_coverage=1 00:12:17.098 --rc genhtml_function_coverage=1 00:12:17.098 --rc genhtml_legend=1 00:12:17.098 --rc geninfo_all_blocks=1 
00:12:17.098 --rc geninfo_unexecuted_blocks=1 00:12:17.098 00:12:17.098 ' 00:12:17.098 18:21:35 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:17.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.098 --rc genhtml_branch_coverage=1 00:12:17.098 --rc genhtml_function_coverage=1 00:12:17.098 --rc genhtml_legend=1 00:12:17.098 --rc geninfo_all_blocks=1 00:12:17.098 --rc geninfo_unexecuted_blocks=1 00:12:17.098 00:12:17.098 ' 00:12:17.098 18:21:35 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:17.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.098 --rc genhtml_branch_coverage=1 00:12:17.098 --rc genhtml_function_coverage=1 00:12:17.098 --rc genhtml_legend=1 00:12:17.098 --rc geninfo_all_blocks=1 00:12:17.098 --rc geninfo_unexecuted_blocks=1 00:12:17.098 00:12:17.098 ' 00:12:17.098 18:21:35 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:17.098 18:21:35 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:17.098 18:21:35 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:17.098 18:21:35 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:17.098 18:21:35 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:17.098 18:21:35 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.098 18:21:35 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.098 18:21:35 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.098 18:21:35 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:17.098 18:21:35 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.098 18:21:35 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:12:17.098 18:21:35 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:12:17.098 18:21:35 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:12:17.098 18:21:35 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:12:17.098 18:21:35 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:12:17.098 18:21:35 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:12:17.098 18:21:35 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:12:17.098 18:21:35 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:12:17.098 18:21:35 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:12:17.098 18:21:35 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:12:17.098 18:21:35 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:12:17.098 18:21:35 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:12:17.098 18:21:35 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:12:17.098 18:21:35 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:12:17.098 18:21:35 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:12:17.098 18:21:35 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:12:17.098 18:21:35 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:12:17.098 18:21:35 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:12:17.098 18:21:35 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:12:17.098 18:21:35 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:12:17.098 18:21:35 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:12:17.098 18:21:35 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:17.359 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:17.621 Waiting for block devices as requested 00:12:17.621 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:17.882 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:17.882 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:17.882 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:23.251 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:23.251 18:21:41 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:12:23.513 18:21:41 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:12:23.513 18:21:41 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:12:23.513 18:21:42 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:12:23.513 18:21:42 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:12:23.513 18:21:42 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:12:23.513 18:21:42 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:12:23.513 18:21:42 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:12:23.513 No valid GPT data, bailing 00:12:23.513 18:21:42 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:12:23.774 18:21:42 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:12:23.774 18:21:42 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:12:23.774 18:21:42 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:12:23.774 18:21:42 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:12:23.774 18:21:42 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:12:23.774 18:21:42 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:12:23.774 18:21:42 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:12:23.774 18:21:42 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:23.774 18:21:42 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:23.774 18:21:42 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:23.774 18:21:42 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:23.774 18:21:42 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:23.774 18:21:42 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:23.774 18:21:42 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:23.774 18:21:42 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:23.774 18:21:42 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:23.774 18:21:42 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:23.774 18:21:42 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.774 18:21:42 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:23.774 ************************************ 00:12:23.774 START TEST xnvme_rpc 00:12:23.774 ************************************ 00:12:23.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.774 18:21:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:23.774 18:21:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:23.774 18:21:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:23.774 18:21:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:23.774 18:21:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:23.774 18:21:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=68923 00:12:23.774 18:21:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 68923 00:12:23.774 18:21:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 68923 ']' 00:12:23.774 18:21:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.774 18:21:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:23.774 18:21:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:23.774 18:21:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.774 18:21:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:23.774 18:21:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.774 [2024-11-20 18:21:42.243607] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:12:23.774 [2024-11-20 18:21:42.243748] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68923 ] 00:12:24.035 [2024-11-20 18:21:42.405866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.035 [2024-11-20 18:21:42.527260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.607 18:21:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:24.607 18:21:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:24.607 18:21:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:12:24.607 18:21:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.607 18:21:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.607 xnvme_bdev 00:12:24.607 18:21:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.607 18:21:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:24.607 18:21:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:24.607 18:21:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:24.607 18:21:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.607 18:21:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.869 18:21:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.869 18:21:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:24.869 18:21:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:24.869 18:21:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:24.869 18:21:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.869 18:21:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.869 18:21:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:24.869 18:21:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.869 18:21:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:24.869 18:21:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:24.869 18:21:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:24.869 18:21:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:24.869 18:21:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.869 18:21:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.869 18:21:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.869 18:21:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:24.869 18:21:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:24.869 18:21:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:24.869 18:21:43 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.869 18:21:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.869 18:21:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:24.869 18:21:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.869 18:21:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:24.869 18:21:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:24.869 18:21:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.869 18:21:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.869 18:21:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.869 18:21:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 68923 00:12:24.869 18:21:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 68923 ']' 00:12:24.869 18:21:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 68923 00:12:24.869 18:21:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:24.869 18:21:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:24.869 18:21:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68923 00:12:24.869 killing process with pid 68923 00:12:24.869 18:21:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:24.869 18:21:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:24.869 18:21:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68923' 00:12:24.869 18:21:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 68923 00:12:24.869 18:21:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 68923 00:12:26.790 00:12:26.790 real 0m2.887s 00:12:26.790 user 0m2.868s 00:12:26.790 sys 0m0.479s 00:12:26.790 18:21:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.790 18:21:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.790 ************************************ 00:12:26.790 END TEST xnvme_rpc 00:12:26.790 ************************************ 00:12:26.790 18:21:45 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:26.790 18:21:45 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:26.790 18:21:45 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.790 18:21:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:26.790 ************************************ 00:12:26.790 START TEST xnvme_bdevperf 00:12:26.790 ************************************ 00:12:26.790 18:21:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:26.790 18:21:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:26.790 18:21:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:26.790 18:21:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:26.790 18:21:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:26.790 18:21:45 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:12:26.790 18:21:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:26.790 18:21:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:26.790 { 00:12:26.790 "subsystems": [ 00:12:26.790 { 00:12:26.790 "subsystem": "bdev", 00:12:26.790 "config": [ 00:12:26.790 { 00:12:26.790 "params": { 00:12:26.790 "io_mechanism": "libaio", 00:12:26.790 "conserve_cpu": false, 00:12:26.790 "filename": "/dev/nvme0n1", 00:12:26.790 "name": "xnvme_bdev" 00:12:26.790 }, 00:12:26.790 "method": "bdev_xnvme_create" 00:12:26.790 }, 00:12:26.790 { 00:12:26.790 "method": "bdev_wait_for_examine" 00:12:26.790 } 00:12:26.790 ] 00:12:26.790 } 00:12:26.790 ] 00:12:26.790 } 00:12:26.790 [2024-11-20 18:21:45.191432] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:12:26.790 [2024-11-20 18:21:45.191595] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68996 ] 00:12:26.790 [2024-11-20 18:21:45.353450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.052 [2024-11-20 18:21:45.478039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.313 Running I/O for 5 seconds... 00:12:29.201 26597.00 IOPS, 103.89 MiB/s [2024-11-20T18:21:49.219Z] 26084.50 IOPS, 101.89 MiB/s [2024-11-20T18:21:49.792Z] 25693.67 IOPS, 100.37 MiB/s [2024-11-20T18:21:51.179Z] 24992.00 IOPS, 97.62 MiB/s [2024-11-20T18:21:51.179Z] 25041.00 IOPS, 97.82 MiB/s 00:12:32.550 Latency(us) 00:12:32.550 [2024-11-20T18:21:51.179Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:32.550 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:32.550 xnvme_bdev : 5.01 25019.55 97.73 0.00 0.00 2552.98 485.22 7410.61 00:12:32.550 [2024-11-20T18:21:51.179Z] =================================================================================================================== 00:12:32.550 [2024-11-20T18:21:51.179Z] Total : 25019.55 97.73 0.00 0.00 2552.98 485.22 7410.61 00:12:33.123 18:21:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:33.123 18:21:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:33.123 18:21:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:33.123 18:21:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:33.123 18:21:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:33.123 { 00:12:33.123 "subsystems": [ 00:12:33.123 { 00:12:33.123 "subsystem": "bdev", 00:12:33.123 "config": [ 00:12:33.123 { 00:12:33.123 "params": { 00:12:33.123 "io_mechanism": "libaio", 00:12:33.123 "conserve_cpu": false, 00:12:33.123 "filename": "/dev/nvme0n1", 00:12:33.123 "name": "xnvme_bdev" 00:12:33.123 }, 00:12:33.123 "method": "bdev_xnvme_create" 00:12:33.123 }, 00:12:33.123 { 00:12:33.123 "method": "bdev_wait_for_examine" 00:12:33.123 } 00:12:33.123 ] 00:12:33.123 } 00:12:33.123 ] 00:12:33.123 } 00:12:33.123 [2024-11-20 18:21:51.738834] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:12:33.123 [2024-11-20 18:21:51.738974] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69074 ] 00:12:33.384 [2024-11-20 18:21:51.900309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.646 [2024-11-20 18:21:52.032473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.907 Running I/O for 5 seconds... 00:12:35.796 31626.00 IOPS, 123.54 MiB/s [2024-11-20T18:21:55.811Z] 32345.00 IOPS, 126.35 MiB/s [2024-11-20T18:21:56.755Z] 33437.67 IOPS, 130.62 MiB/s [2024-11-20T18:21:57.700Z] 33659.00 IOPS, 131.48 MiB/s [2024-11-20T18:21:57.700Z] 33648.80 IOPS, 131.44 MiB/s 00:12:39.071 Latency(us) 00:12:39.071 [2024-11-20T18:21:57.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:39.071 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:39.071 xnvme_bdev : 5.00 33631.59 131.37 0.00 0.00 1898.58 450.56 10586.58 00:12:39.071 [2024-11-20T18:21:57.700Z] =================================================================================================================== 00:12:39.071 [2024-11-20T18:21:57.700Z] Total : 33631.59 131.37 0.00 0.00 1898.58 450.56 10586.58 00:12:39.644 00:12:39.644 real 0m13.073s 00:12:39.644 user 0m4.887s 00:12:39.644 sys 0m6.611s 00:12:39.644 18:21:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:39.644 18:21:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:39.644 ************************************ 00:12:39.644 END TEST xnvme_bdevperf 00:12:39.644 ************************************ 00:12:39.644 18:21:58 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:39.644 18:21:58 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:39.644 18:21:58 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:39.644 18:21:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:39.644 ************************************ 00:12:39.644 START TEST xnvme_fio_plugin 00:12:39.644 ************************************ 00:12:39.644 18:21:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:39.644 18:21:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:39.644 18:21:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:39.644 18:21:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:39.644 18:21:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:39.644 18:21:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:39.644 18:21:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:39.644 18:21:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:12:39.644 18:21:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:39.644 18:21:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:39.644 18:21:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:39.644 18:21:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:39.644 18:21:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:39.644 18:21:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:39.644 18:21:58 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:39.644 18:21:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:39.644 18:21:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:39.644 18:21:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:39.644 18:21:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:39.905 18:21:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:39.905 18:21:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:39.905 18:21:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:39.905 18:21:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:39.905 18:21:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:39.905 { 00:12:39.905 "subsystems": [ 00:12:39.905 { 00:12:39.905 "subsystem": "bdev", 00:12:39.905 "config": [ 00:12:39.905 { 00:12:39.905 "params": { 00:12:39.905 "io_mechanism": "libaio", 00:12:39.905 "conserve_cpu": false, 00:12:39.905 "filename": "/dev/nvme0n1", 00:12:39.905 "name": "xnvme_bdev" 00:12:39.905 }, 00:12:39.905 "method": "bdev_xnvme_create" 00:12:39.905 }, 00:12:39.905 { 00:12:39.905 "method": "bdev_wait_for_examine" 00:12:39.905 } 00:12:39.905 ] 00:12:39.905 } 00:12:39.905 ] 00:12:39.905 } 00:12:39.905 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:39.905 fio-3.35 00:12:39.905 Starting 1 thread 00:12:46.498 00:12:46.498 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69193: Wed Nov 20 18:22:04 2024 00:12:46.498 read: IOPS=32.0k, BW=125MiB/s (131MB/s)(626MiB/5001msec) 00:12:46.498 slat (usec): min=4, max=1955, avg=23.70, stdev=100.08 00:12:46.498 clat (usec): min=87, max=5189, avg=1363.07, stdev=541.24 00:12:46.498 lat (usec): min=198, max=5254, avg=1386.77, stdev=531.42 00:12:46.498 clat percentiles (usec): 00:12:46.498 | 1.00th=[ 281], 5.00th=[ 529], 10.00th=[ 685], 20.00th=[ 906], 00:12:46.498 | 30.00th=[ 1074], 40.00th=[ 1205], 50.00th=[ 1352], 60.00th=[ 1483], 00:12:46.498 | 70.00th=[ 1614], 80.00th=[ 1778], 90.00th=[ 2040], 95.00th=[ 2278], 00:12:46.498 | 99.00th=[ 2868], 99.50th=[ 3163], 99.90th=[ 3752], 99.95th=[ 4047], 00:12:46.498 | 99.99th=[ 4490] 00:12:46.498 bw ( KiB/s): 
min=121264, max=138976, per=100.00%, avg=128649.11, stdev=6275.89, samples=9 00:12:46.498 iops : min=30316, max=34744, avg=32162.22, stdev=1569.03, samples=9 00:12:46.498 lat (usec) : 100=0.01%, 250=0.70%, 500=3.76%, 750=8.09%, 1000=13.06% 00:12:46.498 lat (msec) : 2=63.41%, 4=10.94%, 10=0.05% 00:12:46.498 cpu : usr=36.44%, sys=54.56%, ctx=8, majf=0, minf=764 00:12:46.498 IO depths : 1=0.4%, 2=1.0%, 4=2.7%, 8=8.0%, 16=23.5%, 32=62.3%, >=64=2.1% 00:12:46.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:46.498 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.7%, >=64=0.0% 00:12:46.498 issued rwts: total=160271,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:46.498 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:46.498 00:12:46.498 Run status group 0 (all jobs): 00:12:46.498 READ: bw=125MiB/s (131MB/s), 125MiB/s-125MiB/s (131MB/s-131MB/s), io=626MiB (656MB), run=5001-5001msec 00:12:46.761 ----------------------------------------------------- 00:12:46.761 Suppressions used: 00:12:46.761 count bytes template 00:12:46.761 1 11 /usr/src/fio/parse.c 00:12:46.761 1 8 libtcmalloc_minimal.so 00:12:46.761 1 904 libcrypto.so 00:12:46.761 ----------------------------------------------------- 00:12:46.761 00:12:46.761 18:22:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:46.761 18:22:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:46.761 18:22:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:46.761 18:22:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:46.761 18:22:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:46.761 18:22:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:46.761 18:22:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:46.761 18:22:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:46.761 18:22:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:46.761 18:22:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:46.761 18:22:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:46.761 18:22:05 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:46.761 18:22:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:46.761 18:22:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:46.761 18:22:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:46.761 18:22:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:46.761 18:22:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 
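The two fio jobs in this test share one invocation pattern: the harness resolves the ASan runtime that the SPDK fio plugin links against, preloads it ahead of the plugin, and hands fio the bdev configuration as JSON on an inherited file descriptor rather than a temporary file. A minimal standalone sketch of that pattern, assuming an SPDK build tree and fio sources in the usual CI locations; the paths, the heredoc standing in for the harness's gen_conf pipe, and the /dev/nvme0n1 device are illustrative:

#!/usr/bin/env bash
# Sketch of the fio_bdev invocation captured in the xtrace above.
# The libasan preload is only required for ASan-instrumented builds.
SPDK_DIR=${SPDK_DIR:-$HOME/spdk_repo/spdk}
PLUGIN=$SPDK_DIR/build/fio/spdk_bdev

# Resolve the ASan runtime from the plugin's dynamic dependencies,
# mirroring the ldd | grep libasan | awk '{print $3}' step in the log.
asan_lib=$(ldd "$PLUGIN" | awk '/libasan/ {print $3}')

# fd 62 carries the JSON config; fio reads it back via /dev/fd/62.
LD_PRELOAD="$asan_lib $PLUGIN" /usr/src/fio/fio \
  --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 \
  --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 \
  --numjobs=1 --rw=randread --time_based --runtime=5 \
  --thread=1 --name=xnvme_bdev 62<<'JSON'
{"subsystems":[{"subsystem":"bdev","config":[
  {"params":{"io_mechanism":"libaio","conserve_cpu":false,
   "filename":"/dev/nvme0n1","name":"xnvme_bdev"},
   "method":"bdev_xnvme_create"},
  {"method":"bdev_wait_for_examine"}]}]}
JSON

Swapping --rw=randread for --rw=randwrite reproduces the second job of the pair, whose setup continues in the log immediately below.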
00:12:46.761 18:22:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:46.761 18:22:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:46.761 18:22:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:46.761 18:22:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:46.761 { 00:12:46.761 "subsystems": [ 00:12:46.761 { 00:12:46.761 "subsystem": "bdev", 00:12:46.761 "config": [ 00:12:46.761 { 00:12:46.761 "params": { 00:12:46.761 "io_mechanism": "libaio", 00:12:46.761 "conserve_cpu": false, 00:12:46.761 "filename": "/dev/nvme0n1", 00:12:46.761 "name": "xnvme_bdev" 00:12:46.761 }, 00:12:46.761 "method": "bdev_xnvme_create" 00:12:46.761 }, 00:12:46.761 { 00:12:46.761 "method": "bdev_wait_for_examine" 00:12:46.761 } 00:12:46.761 ] 00:12:46.761 } 00:12:46.761 ] 00:12:46.761 } 00:12:47.023 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:47.023 fio-3.35 00:12:47.023 Starting 1 thread 00:12:53.616 00:12:53.616 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69285: Wed Nov 20 18:22:11 2024 00:12:53.616 write: IOPS=35.7k, BW=139MiB/s (146MB/s)(697MiB/5001msec); 0 zone resets 00:12:53.616 slat (usec): min=4, max=1635, avg=22.76, stdev=78.46 00:12:53.617 clat (usec): min=89, max=9746, avg=1176.91, stdev=555.78 00:12:53.617 lat (usec): min=185, max=9751, avg=1199.67, stdev=550.91 00:12:53.617 clat percentiles (usec): 00:12:53.617 | 1.00th=[ 255], 5.00th=[ 416], 10.00th=[ 537], 20.00th=[ 717], 00:12:53.617 | 30.00th=[ 865], 40.00th=[ 996], 50.00th=[ 1106], 60.00th=[ 1237], 00:12:53.617 | 70.00th=[ 1385], 80.00th=[ 1565], 90.00th=[ 1860], 95.00th=[ 2147], 00:12:53.617 | 99.00th=[ 2933], 99.50th=[ 3294], 99.90th=[ 3949], 99.95th=[ 4228], 00:12:53.617 | 99.99th=[ 6652] 00:12:53.617 bw ( KiB/s): min=126960, max=156872, per=99.47%, avg=141972.00, stdev=10677.86, samples=9 00:12:53.617 iops : min=31740, max=39218, avg=35493.00, stdev=2669.47, samples=9 00:12:53.617 lat (usec) : 100=0.01%, 250=0.92%, 500=7.33%, 750=13.98%, 1000=18.43% 00:12:53.617 lat (msec) : 2=52.25%, 4=7.00%, 10=0.09% 00:12:53.617 cpu : usr=31.16%, sys=55.30%, ctx=27, majf=0, minf=764 00:12:53.617 IO depths : 1=0.2%, 2=0.7%, 4=2.3%, 8=7.9%, 16=24.7%, 32=62.1%, >=64=2.0% 00:12:53.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:53.617 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:12:53.617 issued rwts: total=0,178447,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:53.617 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:53.617 00:12:53.617 Run status group 0 (all jobs): 00:12:53.617 WRITE: bw=139MiB/s (146MB/s), 139MiB/s-139MiB/s (146MB/s-146MB/s), io=697MiB (731MB), run=5001-5001msec 00:12:53.617 ----------------------------------------------------- 00:12:53.617 Suppressions used: 00:12:53.617 count bytes template 00:12:53.617 1 11 /usr/src/fio/parse.c 00:12:53.617 1 8 libtcmalloc_minimal.so 00:12:53.617 1 904 libcrypto.so 00:12:53.617 ----------------------------------------------------- 00:12:53.617 00:12:53.617 00:12:53.617 real 0m13.853s 00:12:53.617 user 0m6.189s 00:12:53.617 sys 
0m6.149s 00:12:53.617 ************************************ 00:12:53.617 END TEST xnvme_fio_plugin 00:12:53.617 ************************************ 00:12:53.617 18:22:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:53.617 18:22:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:53.617 18:22:12 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:53.617 18:22:12 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:12:53.617 18:22:12 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:12:53.617 18:22:12 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:53.617 18:22:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:53.617 18:22:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:53.617 18:22:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:53.617 ************************************ 00:12:53.617 START TEST xnvme_rpc 00:12:53.617 ************************************ 00:12:53.617 18:22:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:53.617 18:22:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:53.617 18:22:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:53.617 18:22:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:53.617 18:22:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:53.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.617 18:22:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69371 00:12:53.617 18:22:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69371 00:12:53.617 18:22:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:53.617 18:22:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69371 ']' 00:12:53.617 18:22:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.617 18:22:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:53.617 18:22:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.617 18:22:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:53.617 18:22:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.878 [2024-11-20 18:22:12.274159] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:12:53.878 [2024-11-20 18:22:12.274295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69371 ] 00:12:53.878 [2024-11-20 18:22:12.438344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.139 [2024-11-20 18:22:12.558252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.712 18:22:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:54.712 18:22:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:54.712 18:22:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:12:54.712 18:22:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.712 18:22:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.712 xnvme_bdev 00:12:54.712 18:22:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.712 18:22:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:54.712 18:22:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:54.712 18:22:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:54.712 18:22:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.712 18:22:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.712 18:22:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.712 18:22:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:54.712 18:22:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:54.712 18:22:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:54.712 18:22:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:54.712 18:22:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.712 18:22:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.712 18:22:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.974 18:22:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:54.974 18:22:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:54.974 18:22:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:54.974 18:22:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.974 18:22:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.974 18:22:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:54.974 18:22:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.974 18:22:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:54.974 18:22:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:54.974 18:22:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:54.974 18:22:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq 
-r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:54.974 18:22:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.974 18:22:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.974 18:22:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.974 18:22:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:12:54.974 18:22:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:54.974 18:22:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.974 18:22:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.974 18:22:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.974 18:22:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69371 00:12:54.974 18:22:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69371 ']' 00:12:54.974 18:22:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69371 00:12:54.974 18:22:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:54.974 18:22:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:54.974 18:22:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69371 00:12:54.974 killing process with pid 69371 00:12:54.974 18:22:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:54.974 18:22:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:54.974 18:22:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69371' 00:12:54.974 18:22:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69371 00:12:54.974 18:22:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69371 00:12:56.982 00:12:56.982 real 0m2.926s 00:12:56.982 user 0m2.911s 00:12:56.982 sys 0m0.484s 00:12:56.982 18:22:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:56.982 ************************************ 00:12:56.982 END TEST xnvme_rpc 00:12:56.982 ************************************ 00:12:56.982 18:22:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.982 18:22:15 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:56.982 18:22:15 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:56.982 18:22:15 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:56.982 18:22:15 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:56.982 ************************************ 00:12:56.982 START TEST xnvme_bdevperf 00:12:56.982 ************************************ 00:12:56.982 18:22:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:56.982 18:22:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:56.982 18:22:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:56.982 18:22:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:56.982 18:22:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:56.982 18:22:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
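Both xnvme_rpc passes verify the bdev the same way: instead of trusting the create call, the rpc_xnvme helper reads the saved configuration back out of the running target with framework_get_config and extracts one field at a time with jq. A condensed sketch of that round-trip, assuming spdk_tgt is already listening on the default /var/tmp/spdk.sock; the rpc.py path is illustrative:

# Create an xnvme bdev, read each parameter back, then tear it down,
# mirroring the rpc_cmd/rpc_xnvme calls in the xtrace above.
RPC=$HOME/spdk_repo/spdk/scripts/rpc.py

# -c requests conserve_cpu=true, as in this second pass of the test
# loop; drop it to match the first (conserve_cpu=false) pass.
"$RPC" bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c

for key in name filename io_mechanism conserve_cpu; do
  "$RPC" framework_get_config bdev |
    jq -r ".[] | select(.method == \"bdev_xnvme_create\").params.$key"
done

"$RPC" bdev_xnvme_delete xnvme_bdev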
00:12:56.982 18:22:15 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:56.982 18:22:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:56.982 { 00:12:56.982 "subsystems": [ 00:12:56.982 { 00:12:56.982 "subsystem": "bdev", 00:12:56.982 "config": [ 00:12:56.982 { 00:12:56.982 "params": { 00:12:56.982 "io_mechanism": "libaio", 00:12:56.982 "conserve_cpu": true, 00:12:56.982 "filename": "/dev/nvme0n1", 00:12:56.982 "name": "xnvme_bdev" 00:12:56.982 }, 00:12:56.982 "method": "bdev_xnvme_create" 00:12:56.982 }, 00:12:56.982 { 00:12:56.982 "method": "bdev_wait_for_examine" 00:12:56.982 } 00:12:56.982 ] 00:12:56.982 } 00:12:56.982 ] 00:12:56.982 } 00:12:56.982 [2024-11-20 18:22:15.254445] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:12:56.982 [2024-11-20 18:22:15.254580] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69445 ] 00:12:56.982 [2024-11-20 18:22:15.418804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.982 [2024-11-20 18:22:15.538228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.244 Running I/O for 5 seconds... 00:12:59.576 31147.00 IOPS, 121.67 MiB/s [2024-11-20T18:22:19.147Z] 31744.00 IOPS, 124.00 MiB/s [2024-11-20T18:22:20.101Z] 32184.33 IOPS, 125.72 MiB/s [2024-11-20T18:22:21.044Z] 32074.75 IOPS, 125.29 MiB/s 00:13:02.415 Latency(us) 00:13:02.415 [2024-11-20T18:22:21.045Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:02.416 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:02.416 xnvme_bdev : 5.00 31834.80 124.35 0.00 0.00 2005.79 444.26 10838.65 00:13:02.416 [2024-11-20T18:22:21.045Z] =================================================================================================================== 00:13:02.416 [2024-11-20T18:22:21.045Z] Total : 31834.80 124.35 0.00 0.00 2005.79 444.26 10838.65 00:13:03.359 18:22:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:03.359 18:22:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:03.359 18:22:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:03.359 18:22:21 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:03.359 18:22:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:03.359 { 00:13:03.359 "subsystems": [ 00:13:03.359 { 00:13:03.359 "subsystem": "bdev", 00:13:03.359 "config": [ 00:13:03.359 { 00:13:03.359 "params": { 00:13:03.359 "io_mechanism": "libaio", 00:13:03.359 "conserve_cpu": true, 00:13:03.359 "filename": "/dev/nvme0n1", 00:13:03.359 "name": "xnvme_bdev" 00:13:03.359 }, 00:13:03.359 "method": "bdev_xnvme_create" 00:13:03.359 }, 00:13:03.359 { 00:13:03.359 "method": "bdev_wait_for_examine" 00:13:03.359 } 00:13:03.359 ] 00:13:03.359 } 00:13:03.359 ] 00:13:03.359 } 00:13:03.359 [2024-11-20 18:22:21.727951] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:13:03.359 [2024-11-20 18:22:21.728138] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69520 ] 00:13:03.359 [2024-11-20 18:22:21.895470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.620 [2024-11-20 18:22:22.012960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.879 Running I/O for 5 seconds... 00:13:05.761 34341.00 IOPS, 134.14 MiB/s [2024-11-20T18:22:25.335Z] 33878.50 IOPS, 132.34 MiB/s [2024-11-20T18:22:26.724Z] 34439.33 IOPS, 134.53 MiB/s [2024-11-20T18:22:27.668Z] 34295.75 IOPS, 133.97 MiB/s 00:13:09.039 Latency(us) 00:13:09.039 [2024-11-20T18:22:27.668Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:09.039 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:09.039 xnvme_bdev : 5.00 34365.78 134.24 0.00 0.00 1858.03 191.41 9326.28 00:13:09.039 [2024-11-20T18:22:27.668Z] =================================================================================================================== 00:13:09.039 [2024-11-20T18:22:27.668Z] Total : 34365.78 134.24 0.00 0.00 1858.03 191.41 9326.28 00:13:09.611 00:13:09.611 real 0m12.930s 00:13:09.611 user 0m5.001s 00:13:09.611 sys 0m6.190s 00:13:09.611 18:22:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:09.611 ************************************ 00:13:09.611 END TEST xnvme_bdevperf 00:13:09.611 18:22:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:09.611 ************************************ 00:13:09.611 18:22:28 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:09.611 18:22:28 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:09.611 18:22:28 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:09.611 18:22:28 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:09.611 ************************************ 00:13:09.611 START TEST xnvme_fio_plugin 00:13:09.611 ************************************ 00:13:09.611 18:22:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:09.611 18:22:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:09.611 18:22:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:13:09.611 18:22:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:09.611 18:22:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:09.611 18:22:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:09.611 18:22:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:09.611 18:22:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:09.611 18:22:28 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:09.611 18:22:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:09.611 18:22:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:09.611 18:22:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:09.611 18:22:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:09.611 18:22:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:09.611 18:22:28 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:09.611 18:22:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:09.612 18:22:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:09.612 18:22:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:09.612 18:22:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:09.612 18:22:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:09.612 18:22:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:09.612 18:22:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:09.612 18:22:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:09.612 18:22:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:09.612 { 00:13:09.612 "subsystems": [ 00:13:09.612 { 00:13:09.612 "subsystem": "bdev", 00:13:09.612 "config": [ 00:13:09.612 { 00:13:09.612 "params": { 00:13:09.612 "io_mechanism": "libaio", 00:13:09.612 "conserve_cpu": true, 00:13:09.612 "filename": "/dev/nvme0n1", 00:13:09.612 "name": "xnvme_bdev" 00:13:09.612 }, 00:13:09.612 "method": "bdev_xnvme_create" 00:13:09.612 }, 00:13:09.612 { 00:13:09.612 "method": "bdev_wait_for_examine" 00:13:09.612 } 00:13:09.612 ] 00:13:09.612 } 00:13:09.612 ] 00:13:09.612 } 00:13:09.873 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:09.873 fio-3.35 00:13:09.873 Starting 1 thread 00:13:16.473 00:13:16.473 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69634: Wed Nov 20 18:22:34 2024 00:13:16.473 read: IOPS=33.2k, BW=130MiB/s (136MB/s)(649MiB/5001msec) 00:13:16.473 slat (usec): min=4, max=3989, avg=22.04, stdev=97.52 00:13:16.473 clat (usec): min=106, max=7418, avg=1340.15, stdev=521.15 00:13:16.473 lat (usec): min=194, max=7454, avg=1362.19, stdev=511.72 00:13:16.473 clat percentiles (usec): 00:13:16.473 | 1.00th=[ 281], 5.00th=[ 553], 10.00th=[ 701], 20.00th=[ 922], 00:13:16.473 | 30.00th=[ 1074], 40.00th=[ 1205], 50.00th=[ 1319], 60.00th=[ 1434], 00:13:16.473 | 70.00th=[ 1565], 80.00th=[ 1713], 90.00th=[ 1942], 95.00th=[ 2180], 00:13:16.473 | 99.00th=[ 2966], 99.50th=[ 3261], 99.90th=[ 3851], 99.95th=[ 4228], 00:13:16.473 | 99.99th=[ 6259] 00:13:16.473 bw ( KiB/s): min=119504, max=144400, per=100.00%, avg=132973.33, stdev=8072.74, 
samples=9 00:13:16.473 iops : min=29876, max=36100, avg=33243.33, stdev=2018.19, samples=9 00:13:16.473 lat (usec) : 250=0.69%, 500=3.15%, 750=8.01%, 1000=12.76% 00:13:16.473 lat (msec) : 2=67.01%, 4=8.30%, 10=0.08% 00:13:16.473 cpu : usr=40.30%, sys=51.36%, ctx=14, majf=0, minf=764 00:13:16.473 IO depths : 1=0.5%, 2=1.2%, 4=3.0%, 8=8.4%, 16=23.3%, 32=61.6%, >=64=2.1% 00:13:16.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:16.473 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:13:16.473 issued rwts: total=166043,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:16.473 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:16.473 00:13:16.473 Run status group 0 (all jobs): 00:13:16.473 READ: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=649MiB (680MB), run=5001-5001msec 00:13:16.473 ----------------------------------------------------- 00:13:16.473 Suppressions used: 00:13:16.473 count bytes template 00:13:16.473 1 11 /usr/src/fio/parse.c 00:13:16.473 1 8 libtcmalloc_minimal.so 00:13:16.473 1 904 libcrypto.so 00:13:16.473 ----------------------------------------------------- 00:13:16.473 00:13:16.473 18:22:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:16.473 18:22:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:16.473 18:22:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:16.473 18:22:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:16.473 18:22:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:16.473 18:22:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:16.473 18:22:35 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:16.473 18:22:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:16.473 18:22:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:16.473 18:22:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:16.473 18:22:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:16.473 18:22:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:16.473 18:22:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:16.473 18:22:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:16.473 18:22:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:16.473 18:22:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:16.734 18:22:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:16.734 18:22:35 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:16.734 18:22:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:16.735 18:22:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:16.735 18:22:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:16.735 { 00:13:16.735 "subsystems": [ 00:13:16.735 { 00:13:16.735 "subsystem": "bdev", 00:13:16.735 "config": [ 00:13:16.735 { 00:13:16.735 "params": { 00:13:16.735 "io_mechanism": "libaio", 00:13:16.735 "conserve_cpu": true, 00:13:16.735 "filename": "/dev/nvme0n1", 00:13:16.735 "name": "xnvme_bdev" 00:13:16.735 }, 00:13:16.735 "method": "bdev_xnvme_create" 00:13:16.735 }, 00:13:16.735 { 00:13:16.735 "method": "bdev_wait_for_examine" 00:13:16.735 } 00:13:16.735 ] 00:13:16.735 } 00:13:16.735 ] 00:13:16.735 } 00:13:16.735 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:16.735 fio-3.35 00:13:16.735 Starting 1 thread 00:13:23.324 00:13:23.324 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69730: Wed Nov 20 18:22:40 2024 00:13:23.324 write: IOPS=35.1k, BW=137MiB/s (144MB/s)(686MiB/5001msec); 0 zone resets 00:13:23.324 slat (usec): min=4, max=2014, avg=21.05, stdev=88.28 00:13:23.324 clat (usec): min=107, max=5969, avg=1251.73, stdev=525.67 00:13:23.324 lat (usec): min=190, max=5976, avg=1272.78, stdev=519.11 00:13:23.324 clat percentiles (usec): 00:13:23.324 | 1.00th=[ 273], 5.00th=[ 474], 10.00th=[ 627], 20.00th=[ 824], 00:13:23.324 | 30.00th=[ 971], 40.00th=[ 1090], 50.00th=[ 1221], 60.00th=[ 1336], 00:13:23.324 | 70.00th=[ 1467], 80.00th=[ 1631], 90.00th=[ 1876], 95.00th=[ 2114], 00:13:23.324 | 99.00th=[ 2900], 99.50th=[ 3326], 99.90th=[ 3982], 99.95th=[ 4359], 00:13:23.324 | 99.99th=[ 4686] 00:13:23.324 bw ( KiB/s): min=134504, max=146040, per=100.00%, avg=141156.44, stdev=4487.11, samples=9 00:13:23.324 iops : min=33626, max=36510, avg=35289.11, stdev=1121.78, samples=9 00:13:23.324 lat (usec) : 250=0.74%, 500=4.90%, 750=10.36%, 1000=16.38% 00:13:23.324 lat (msec) : 2=60.56%, 4=6.97%, 10=0.09% 00:13:23.324 cpu : usr=40.48%, sys=50.52%, ctx=13, majf=0, minf=764 00:13:23.324 IO depths : 1=0.5%, 2=1.1%, 4=3.0%, 8=8.4%, 16=23.3%, 32=61.7%, >=64=2.1% 00:13:23.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:23.324 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:13:23.324 issued rwts: total=0,175599,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:23.324 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:23.324 00:13:23.324 Run status group 0 (all jobs): 00:13:23.324 WRITE: bw=137MiB/s (144MB/s), 137MiB/s-137MiB/s (144MB/s-144MB/s), io=686MiB (719MB), run=5001-5001msec 00:13:23.586 ----------------------------------------------------- 00:13:23.586 Suppressions used: 00:13:23.586 count bytes template 00:13:23.586 1 11 /usr/src/fio/parse.c 00:13:23.586 1 8 libtcmalloc_minimal.so 00:13:23.586 1 904 libcrypto.so 00:13:23.586 ----------------------------------------------------- 00:13:23.586 00:13:23.586 ************************************ 00:13:23.586 END TEST xnvme_fio_plugin 00:13:23.586 ************************************ 00:13:23.586 
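Both fio passes above go through the same sanitizer preload dance from autotest_common.sh before launching fio: resolve the ASAN runtime the plugin was linked against, then hand fio the runtime and the plugin together via LD_PRELOAD so the interceptors are mapped before fio dlopen()s the plugin. A condensed sketch of that logic, using the paths this log prints (the harness feeds the JSON config over /dev/fd/62; a plain file is shown here instead):

```bash
# Condensed from the autotest_common.sh@1343-1356 trace above; a sketch, not
# the verbatim harness code.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev

asan_lib=
for sanitizer in libasan libclang_rt.asan; do
    # Resolve the sanitizer runtime the plugin links against,
    # e.g. /usr/lib64/libasan.so.8 in this log.
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n "$asan_lib" ]] && break
done

# fio dlopen()s the plugin, so the ASAN runtime must already be loaded.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=./xnvme.json --filename=xnvme_bdev \
    --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
    --time_based --runtime=5 --thread=1 --name xnvme_bdev
```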
00:13:23.586 real 0m13.817s 00:13:23.586 user 0m6.838s 00:13:23.586 sys 0m5.723s 00:13:23.586 18:22:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:23.586 18:22:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:23.586 18:22:42 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:23.586 18:22:42 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:23.586 18:22:42 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:23.586 18:22:42 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:23.586 18:22:42 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:23.586 18:22:42 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:23.586 18:22:42 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:23.586 18:22:42 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:23.586 18:22:42 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:23.586 18:22:42 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:23.586 18:22:42 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:23.586 18:22:42 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:23.586 ************************************ 00:13:23.586 START TEST xnvme_rpc 00:13:23.586 ************************************ 00:13:23.587 18:22:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:23.587 18:22:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:23.587 18:22:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:23.587 18:22:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:23.587 18:22:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:23.587 18:22:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69812 00:13:23.587 18:22:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69812 00:13:23.587 18:22:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69812 ']' 00:13:23.587 18:22:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.587 18:22:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:23.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:23.587 18:22:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.587 18:22:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:23.587 18:22:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.587 18:22:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:23.587 [2024-11-20 18:22:42.145690] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
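The libaio passes are done; the loop at xnvme.sh@75-84 above has switched io_mechanism to io_uring with conserve_cpu=false, and the xnvme_rpc test starting here exercises the RPC surface rather than I/O: create the bdev against a bare spdk_tgt, read each creation parameter back out of the framework config, delete the bdev, kill the target. Stripped of the xtrace plumbing, the flow is roughly the following (rpc_cmd in the harness wraps scripts/rpc.py, and the harness waits for the RPC socket before the first call):

```bash
# Rough shape of the xnvme_rpc flow traced below. Paths assume an SPDK checkout.
./build/bin/spdk_tgt &      # waitforlisten 69812 in the trace polls the socket
./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring

# Read back what the target recorded, one parameter at a time (the jq filter
# is the one the trace shows under xnvme/common.sh@66).
./scripts/rpc.py framework_get_config bdev |
    jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'

./scripts/rpc.py bdev_xnvme_delete xnvme_bdev
kill %1
```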
00:13:23.587 [2024-11-20 18:22:42.145826] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69812 ] 00:13:23.848 [2024-11-20 18:22:42.307870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.848 [2024-11-20 18:22:42.428268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.791 18:22:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:24.791 18:22:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:24.791 18:22:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:13:24.791 18:22:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.791 18:22:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.791 xnvme_bdev 00:13:24.791 18:22:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.791 18:22:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:24.791 18:22:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:24.791 18:22:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:24.791 18:22:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.791 18:22:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.791 18:22:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.791 18:22:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:24.791 18:22:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:24.791 18:22:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:24.791 18:22:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:24.791 18:22:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.791 18:22:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.791 18:22:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.791 18:22:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:24.791 18:22:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:24.791 18:22:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:24.791 18:22:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.791 18:22:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.791 18:22:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:24.791 18:22:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.791 18:22:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:24.791 18:22:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:24.791 18:22:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:24.791 18:22:43 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.791 18:22:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.791 18:22:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:24.791 18:22:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.791 18:22:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:24.791 18:22:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:24.791 18:22:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.791 18:22:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.792 18:22:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.792 18:22:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69812 00:13:24.792 18:22:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69812 ']' 00:13:24.792 18:22:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69812 00:13:24.792 18:22:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:24.792 18:22:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:24.792 18:22:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69812 00:13:24.792 killing process with pid 69812 00:13:24.792 18:22:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:24.792 18:22:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:24.792 18:22:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69812' 00:13:24.792 18:22:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69812 00:13:24.792 18:22:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69812 00:13:26.703 ************************************ 00:13:26.703 END TEST xnvme_rpc 00:13:26.703 ************************************ 00:13:26.703 00:13:26.703 real 0m2.898s 00:13:26.703 user 0m2.903s 00:13:26.703 sys 0m0.468s 00:13:26.703 18:22:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:26.703 18:22:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.703 18:22:45 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:26.703 18:22:45 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:26.703 18:22:45 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:26.703 18:22:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:26.703 ************************************ 00:13:26.703 START TEST xnvme_bdevperf 00:13:26.703 ************************************ 00:13:26.703 18:22:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:26.703 18:22:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:26.703 18:22:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:26.703 18:22:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:26.703 18:22:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:26.703 18:22:45 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:26.703 18:22:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:26.703 18:22:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:26.703 { 00:13:26.703 "subsystems": [ 00:13:26.703 { 00:13:26.703 "subsystem": "bdev", 00:13:26.703 "config": [ 00:13:26.703 { 00:13:26.703 "params": { 00:13:26.703 "io_mechanism": "io_uring", 00:13:26.703 "conserve_cpu": false, 00:13:26.703 "filename": "/dev/nvme0n1", 00:13:26.703 "name": "xnvme_bdev" 00:13:26.703 }, 00:13:26.703 "method": "bdev_xnvme_create" 00:13:26.703 }, 00:13:26.703 { 00:13:26.703 "method": "bdev_wait_for_examine" 00:13:26.703 } 00:13:26.703 ] 00:13:26.703 } 00:13:26.703 ] 00:13:26.703 } 00:13:26.703 [2024-11-20 18:22:45.100141] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:13:26.703 [2024-11-20 18:22:45.100486] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69886 ] 00:13:26.703 [2024-11-20 18:22:45.266217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.964 [2024-11-20 18:22:45.387662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.224 Running I/O for 5 seconds... 00:13:29.108 33097.00 IOPS, 129.29 MiB/s [2024-11-20T18:22:48.680Z] 33855.50 IOPS, 132.25 MiB/s [2024-11-20T18:22:50.068Z] 33708.00 IOPS, 131.67 MiB/s [2024-11-20T18:22:51.009Z] 33610.50 IOPS, 131.29 MiB/s 00:13:32.380 Latency(us) 00:13:32.380 [2024-11-20T18:22:51.009Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:32.380 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:32.380 xnvme_bdev : 5.00 33497.86 130.85 0.00 0.00 1906.72 360.76 9578.34 00:13:32.380 [2024-11-20T18:22:51.009Z] =================================================================================================================== 00:13:32.380 [2024-11-20T18:22:51.009Z] Total : 33497.86 130.85 0.00 0.00 1906.72 360.76 9578.34 00:13:32.953 18:22:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:32.953 18:22:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:32.953 18:22:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:32.953 18:22:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:32.953 18:22:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:32.953 { 00:13:32.953 "subsystems": [ 00:13:32.953 { 00:13:32.953 "subsystem": "bdev", 00:13:32.953 "config": [ 00:13:32.953 { 00:13:32.953 "params": { 00:13:32.953 "io_mechanism": "io_uring", 00:13:32.953 "conserve_cpu": false, 00:13:32.953 "filename": "/dev/nvme0n1", 00:13:32.953 "name": "xnvme_bdev" 00:13:32.953 }, 00:13:32.953 "method": "bdev_xnvme_create" 00:13:32.953 }, 00:13:32.953 { 00:13:32.953 "method": "bdev_wait_for_examine" 00:13:32.953 } 00:13:32.953 ] 00:13:32.953 } 00:13:32.953 ] 00:13:32.953 } 00:13:32.953 [2024-11-20 18:22:51.517169] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
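The randread bdevperf pass above and the randwrite pass starting here use an identical invocation apart from -w. Spelled out with the flags as they are used in this log (-T is inferred from its use here to name the bdev under test; the JSON config arrives on /dev/fd/62, but a regular file works the same way):

```bash
# -q: queue depth, -o: IO size in bytes, -w: workload pattern,
# -t: run time in seconds, -T: the bdev to exercise (xnvme_bdev, as created
# by the JSON config above).
./build/examples/bdevperf --json ./xnvme.json \
    -q 64 -o 4096 -w randwrite -t 5 -T xnvme_bdev
```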
00:13:32.953 [2024-11-20 18:22:51.517688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69962 ] 00:13:33.216 [2024-11-20 18:22:51.686950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.216 [2024-11-20 18:22:51.802598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.478 Running I/O for 5 seconds... 00:13:35.868 34660.00 IOPS, 135.39 MiB/s [2024-11-20T18:22:55.440Z] 36784.50 IOPS, 143.69 MiB/s [2024-11-20T18:22:56.383Z] 37695.67 IOPS, 147.25 MiB/s [2024-11-20T18:22:57.326Z] 37503.75 IOPS, 146.50 MiB/s 00:13:38.697 Latency(us) 00:13:38.697 [2024-11-20T18:22:57.326Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:38.697 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:38.697 xnvme_bdev : 5.00 37087.58 144.87 0.00 0.00 1722.01 354.46 8065.97 00:13:38.697 [2024-11-20T18:22:57.326Z] =================================================================================================================== 00:13:38.697 [2024-11-20T18:22:57.326Z] Total : 37087.58 144.87 0.00 0.00 1722.01 354.46 8065.97 00:13:39.267 00:13:39.267 real 0m12.845s 00:13:39.267 user 0m6.005s 00:13:39.267 sys 0m6.572s 00:13:39.267 ************************************ 00:13:39.267 END TEST xnvme_bdevperf 00:13:39.267 ************************************ 00:13:39.267 18:22:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:39.267 18:22:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:39.527 18:22:57 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:39.527 18:22:57 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:39.527 18:22:57 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:39.527 18:22:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:39.527 ************************************ 00:13:39.527 START TEST xnvme_fio_plugin 00:13:39.527 ************************************ 00:13:39.527 18:22:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:39.527 18:22:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:39.527 18:22:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:39.527 18:22:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:39.527 18:22:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:39.527 18:22:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:39.527 18:22:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:39.527 18:22:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:39.527 18:22:57 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:39.527 18:22:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:39.527 18:22:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:39.527 18:22:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:39.527 18:22:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:39.527 18:22:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:39.527 18:22:57 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:39.527 18:22:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:39.527 18:22:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:39.527 18:22:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:39.527 18:22:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:39.527 18:22:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:39.527 18:22:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:39.527 18:22:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:39.527 18:22:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:39.527 18:22:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:39.527 { 00:13:39.527 "subsystems": [ 00:13:39.527 { 00:13:39.527 "subsystem": "bdev", 00:13:39.527 "config": [ 00:13:39.527 { 00:13:39.527 "params": { 00:13:39.527 "io_mechanism": "io_uring", 00:13:39.527 "conserve_cpu": false, 00:13:39.527 "filename": "/dev/nvme0n1", 00:13:39.527 "name": "xnvme_bdev" 00:13:39.527 }, 00:13:39.527 "method": "bdev_xnvme_create" 00:13:39.527 }, 00:13:39.527 { 00:13:39.527 "method": "bdev_wait_for_examine" 00:13:39.527 } 00:13:39.527 ] 00:13:39.527 } 00:13:39.527 ] 00:13:39.527 } 00:13:39.527 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:39.527 fio-3.35 00:13:39.527 Starting 1 thread 00:13:46.118 00:13:46.118 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70076: Wed Nov 20 18:23:03 2024 00:13:46.118 read: IOPS=36.4k, BW=142MiB/s (149MB/s)(710MiB/5001msec) 00:13:46.118 slat (nsec): min=2706, max=78851, avg=3059.23, stdev=1502.33 00:13:46.118 clat (usec): min=813, max=4905, avg=1636.20, stdev=290.86 00:13:46.118 lat (usec): min=817, max=4912, avg=1639.26, stdev=291.06 00:13:46.118 clat percentiles (usec): 00:13:46.118 | 1.00th=[ 1172], 5.00th=[ 1254], 10.00th=[ 1303], 20.00th=[ 1385], 00:13:46.118 | 30.00th=[ 1450], 40.00th=[ 1516], 50.00th=[ 1598], 60.00th=[ 1680], 00:13:46.118 | 70.00th=[ 1762], 80.00th=[ 1860], 90.00th=[ 2008], 95.00th=[ 2147], 00:13:46.118 | 99.00th=[ 2442], 99.50th=[ 2573], 99.90th=[ 2900], 99.95th=[ 3097], 00:13:46.118 | 99.99th=[ 4555] 00:13:46.118 bw ( KiB/s): min=137728, max=159744, per=100.00%, avg=146830.22, 
stdev=7699.30, samples=9 00:13:46.118 iops : min=34432, max=39936, avg=36707.56, stdev=1924.82, samples=9 00:13:46.118 lat (usec) : 1000=0.03% 00:13:46.118 lat (msec) : 2=89.22%, 4=10.71%, 10=0.04% 00:13:46.118 cpu : usr=32.20%, sys=66.72%, ctx=10, majf=0, minf=762 00:13:46.118 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:46.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.118 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:46.118 issued rwts: total=181807,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:46.118 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:46.118 00:13:46.118 Run status group 0 (all jobs): 00:13:46.118 READ: bw=142MiB/s (149MB/s), 142MiB/s-142MiB/s (149MB/s-149MB/s), io=710MiB (745MB), run=5001-5001msec 00:13:46.379 ----------------------------------------------------- 00:13:46.379 Suppressions used: 00:13:46.379 count bytes template 00:13:46.379 1 11 /usr/src/fio/parse.c 00:13:46.379 1 8 libtcmalloc_minimal.so 00:13:46.379 1 904 libcrypto.so 00:13:46.379 ----------------------------------------------------- 00:13:46.379 00:13:46.379 18:23:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:46.379 18:23:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:46.379 18:23:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:46.379 18:23:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:46.379 18:23:04 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:46.379 18:23:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:46.379 18:23:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:46.379 18:23:04 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:46.379 18:23:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:46.379 18:23:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:46.379 18:23:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:46.379 18:23:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:46.379 18:23:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:46.379 18:23:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:46.379 18:23:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:46.379 18:23:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:46.379 18:23:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:46.379 18:23:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:13:46.379 18:23:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:46.379 18:23:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:46.379 18:23:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:46.379 { 00:13:46.379 "subsystems": [ 00:13:46.379 { 00:13:46.379 "subsystem": "bdev", 00:13:46.379 "config": [ 00:13:46.379 { 00:13:46.380 "params": { 00:13:46.380 "io_mechanism": "io_uring", 00:13:46.380 "conserve_cpu": false, 00:13:46.380 "filename": "/dev/nvme0n1", 00:13:46.380 "name": "xnvme_bdev" 00:13:46.380 }, 00:13:46.380 "method": "bdev_xnvme_create" 00:13:46.380 }, 00:13:46.380 { 00:13:46.380 "method": "bdev_wait_for_examine" 00:13:46.380 } 00:13:46.380 ] 00:13:46.380 } 00:13:46.380 ] 00:13:46.380 } 00:13:46.641 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:46.641 fio-3.35 00:13:46.641 Starting 1 thread 00:13:53.231 00:13:53.231 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70168: Wed Nov 20 18:23:10 2024 00:13:53.231 write: IOPS=34.7k, BW=136MiB/s (142MB/s)(678MiB/5002msec); 0 zone resets 00:13:53.231 slat (usec): min=2, max=109, avg= 3.59, stdev= 1.83 00:13:53.231 clat (usec): min=380, max=7348, avg=1699.57, stdev=283.17 00:13:53.231 lat (usec): min=383, max=7351, avg=1703.16, stdev=283.53 00:13:53.231 clat percentiles (usec): 00:13:53.231 | 1.00th=[ 1188], 5.00th=[ 1303], 10.00th=[ 1385], 20.00th=[ 1467], 00:13:53.231 | 30.00th=[ 1549], 40.00th=[ 1598], 50.00th=[ 1663], 60.00th=[ 1729], 00:13:53.231 | 70.00th=[ 1811], 80.00th=[ 1893], 90.00th=[ 2057], 95.00th=[ 2212], 00:13:53.231 | 99.00th=[ 2507], 99.50th=[ 2638], 99.90th=[ 3032], 99.95th=[ 3228], 00:13:53.231 | 99.99th=[ 5538] 00:13:53.231 bw ( KiB/s): min=133080, max=151064, per=100.00%, avg=139027.11, stdev=6484.93, samples=9 00:13:53.231 iops : min=33270, max=37766, avg=34756.78, stdev=1621.23, samples=9 00:13:53.231 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.01% 00:13:53.231 lat (msec) : 2=87.19%, 4=12.75%, 10=0.02% 00:13:53.231 cpu : usr=31.93%, sys=66.91%, ctx=16, majf=0, minf=762 00:13:53.231 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.4%, 16=25.0%, 32=50.2%, >=64=1.6% 00:13:53.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:53.231 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:53.231 issued rwts: total=0,173591,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:53.231 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:53.231 00:13:53.231 Run status group 0 (all jobs): 00:13:53.231 WRITE: bw=136MiB/s (142MB/s), 136MiB/s-136MiB/s (142MB/s-142MB/s), io=678MiB (711MB), run=5002-5002msec 00:13:53.231 ----------------------------------------------------- 00:13:53.231 Suppressions used: 00:13:53.231 count bytes template 00:13:53.231 1 11 /usr/src/fio/parse.c 00:13:53.231 1 8 libtcmalloc_minimal.so 00:13:53.231 1 904 libcrypto.so 00:13:53.231 ----------------------------------------------------- 00:13:53.231 00:13:53.231 00:13:53.231 real 0m13.773s 00:13:53.231 user 0m6.065s 00:13:53.231 sys 0m7.272s 00:13:53.231 ************************************ 00:13:53.231 END TEST xnvme_fio_plugin 00:13:53.231 
************************************ 00:13:53.231 18:23:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:53.231 18:23:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:53.231 18:23:11 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:53.231 18:23:11 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:53.231 18:23:11 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:53.231 18:23:11 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:53.231 18:23:11 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:53.231 18:23:11 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:53.231 18:23:11 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:53.231 ************************************ 00:13:53.231 START TEST xnvme_rpc 00:13:53.231 ************************************ 00:13:53.231 18:23:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:53.231 18:23:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:53.231 18:23:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:53.231 18:23:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:53.231 18:23:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:53.231 18:23:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70255 00:13:53.231 18:23:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70255 00:13:53.231 18:23:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70255 ']' 00:13:53.231 18:23:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.231 18:23:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:53.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.231 18:23:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.231 18:23:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:53.231 18:23:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:53.231 18:23:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.492 [2024-11-20 18:23:11.879225] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
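This second xnvme_rpc pass differs from the first only in conserve_cpu. The cc array set up at xnvme.sh@48-50 above maps the loop value onto the optional -c flag of bdev_xnvme_create, which is why the create call in the trace below carries -c and the read-back check expects "true". A reconstruction of that mapping:

```bash
# Reconstructed from the xnvme.sh@48-50 trace: empty string for false,
# -c for true.
declare -A cc=([false]="" [true]="-c")
conserve_cpu=true

./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring ${cc[$conserve_cpu]}

# The read-back below should now print "true":
./scripts/rpc.py framework_get_config bdev |
    jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
```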
00:13:53.492 [2024-11-20 18:23:11.879551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70255 ] 00:13:53.492 [2024-11-20 18:23:12.043594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.754 [2024-11-20 18:23:12.169633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.328 18:23:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:54.328 18:23:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:54.328 18:23:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:13:54.328 18:23:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.328 18:23:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.328 xnvme_bdev 00:13:54.328 18:23:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.328 18:23:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:54.328 18:23:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:54.328 18:23:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.328 18:23:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.328 18:23:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:54.328 18:23:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.328 18:23:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:54.328 18:23:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:54.328 18:23:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:54.328 18:23:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.328 18:23:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:54.328 18:23:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.328 18:23:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.328 18:23:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:54.328 18:23:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:54.328 18:23:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:54.328 18:23:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:54.328 18:23:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.328 18:23:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.590 18:23:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.590 18:23:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:54.590 18:23:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:54.590 18:23:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:54.590 18:23:12 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:54.590 18:23:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.590 18:23:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.590 18:23:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.590 18:23:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:54.590 18:23:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:54.590 18:23:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.590 18:23:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.590 18:23:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.590 18:23:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70255 00:13:54.590 18:23:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70255 ']' 00:13:54.590 18:23:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70255 00:13:54.590 18:23:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:54.590 18:23:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:54.590 18:23:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70255 00:13:54.590 18:23:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:54.590 18:23:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:54.590 killing process with pid 70255 00:13:54.590 18:23:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70255' 00:13:54.590 18:23:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70255 00:13:54.590 18:23:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70255 00:13:56.508 00:13:56.508 real 0m2.901s 00:13:56.508 user 0m2.929s 00:13:56.508 sys 0m0.468s 00:13:56.508 ************************************ 00:13:56.508 END TEST xnvme_rpc 00:13:56.508 ************************************ 00:13:56.508 18:23:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:56.508 18:23:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.508 18:23:14 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:56.508 18:23:14 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:56.508 18:23:14 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:56.508 18:23:14 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:56.508 ************************************ 00:13:56.508 START TEST xnvme_bdevperf 00:13:56.508 ************************************ 00:13:56.508 18:23:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:56.508 18:23:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:56.508 18:23:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:56.508 18:23:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:56.508 18:23:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:56.508 18:23:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:13:56.508 18:23:14 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:56.509 18:23:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:56.509 { 00:13:56.509 "subsystems": [ 00:13:56.509 { 00:13:56.509 "subsystem": "bdev", 00:13:56.509 "config": [ 00:13:56.509 { 00:13:56.509 "params": { 00:13:56.509 "io_mechanism": "io_uring", 00:13:56.509 "conserve_cpu": true, 00:13:56.509 "filename": "/dev/nvme0n1", 00:13:56.509 "name": "xnvme_bdev" 00:13:56.509 }, 00:13:56.509 "method": "bdev_xnvme_create" 00:13:56.509 }, 00:13:56.509 { 00:13:56.509 "method": "bdev_wait_for_examine" 00:13:56.509 } 00:13:56.509 ] 00:13:56.509 } 00:13:56.509 ] 00:13:56.509 } 00:13:56.509 [2024-11-20 18:23:14.826605] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:13:56.509 [2024-11-20 18:23:14.826775] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70330 ] 00:13:56.509 [2024-11-20 18:23:14.991336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.509 [2024-11-20 18:23:15.109430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.771 Running I/O for 5 seconds... 00:13:59.096 37200.00 IOPS, 145.31 MiB/s [2024-11-20T18:23:18.668Z] 36074.50 IOPS, 140.92 MiB/s [2024-11-20T18:23:19.611Z] 35631.33 IOPS, 139.18 MiB/s [2024-11-20T18:23:20.553Z] 35653.25 IOPS, 139.27 MiB/s [2024-11-20T18:23:20.554Z] 35443.60 IOPS, 138.45 MiB/s 00:14:01.925 Latency(us) 00:14:01.925 [2024-11-20T18:23:20.554Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:01.925 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:01.925 xnvme_bdev : 5.00 35440.29 138.44 0.00 0.00 1802.24 819.20 14619.57 00:14:01.925 [2024-11-20T18:23:20.554Z] =================================================================================================================== 00:14:01.925 [2024-11-20T18:23:20.554Z] Total : 35440.29 138.44 0.00 0.00 1802.24 819.20 14619.57 00:14:02.871 18:23:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:02.871 18:23:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:02.871 18:23:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:02.871 18:23:21 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:02.871 18:23:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:02.871 { 00:14:02.871 "subsystems": [ 00:14:02.871 { 00:14:02.871 "subsystem": "bdev", 00:14:02.871 "config": [ 00:14:02.871 { 00:14:02.871 "params": { 00:14:02.871 "io_mechanism": "io_uring", 00:14:02.871 "conserve_cpu": true, 00:14:02.871 "filename": "/dev/nvme0n1", 00:14:02.871 "name": "xnvme_bdev" 00:14:02.871 }, 00:14:02.871 "method": "bdev_xnvme_create" 00:14:02.871 }, 00:14:02.871 { 00:14:02.871 "method": "bdev_wait_for_examine" 00:14:02.871 } 00:14:02.871 ] 00:14:02.871 } 00:14:02.871 ] 00:14:02.871 } 00:14:02.871 [2024-11-20 18:23:21.231939] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
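Every tool in this log takes its config as --json /dev/fd/62, which is consistent with bash process substitution in the harness: gen_conf prints the JSON shown above to stdout and the shell exposes it to the child process as a /dev/fd entry. A hedged sketch, with a stand-in for the harness's gen_conf helper:

```bash
# Hypothetical stand-in for the harness's gen_conf: emit the same JSON this
# log prints before each bdevperf run.
gen_conf() {
    cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "io_uring",
            "conserve_cpu": true,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
}

# <(gen_conf) shows up in the child as /dev/fd/NN, matching the
# --json /dev/fd/62 invocations traced here.
./build/examples/bdevperf --json <(gen_conf) -q 64 -o 4096 -w randwrite -t 5 -T xnvme_bdev
```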
00:14:02.871 [2024-11-20 18:23:21.232143] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70405 ] 00:14:02.871 [2024-11-20 18:23:21.388081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.132 [2024-11-20 18:23:21.509618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.394 Running I/O for 5 seconds... 00:14:05.279 34768.00 IOPS, 135.81 MiB/s [2024-11-20T18:23:24.853Z] 34716.50 IOPS, 135.61 MiB/s [2024-11-20T18:23:25.796Z] 34735.33 IOPS, 135.68 MiB/s [2024-11-20T18:23:27.181Z] 34875.25 IOPS, 136.23 MiB/s 00:14:08.552 Latency(us) 00:14:08.552 [2024-11-20T18:23:27.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:08.552 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:08.552 xnvme_bdev : 5.00 34774.74 135.84 0.00 0.00 1836.50 677.42 8318.03 00:14:08.552 [2024-11-20T18:23:27.181Z] =================================================================================================================== 00:14:08.552 [2024-11-20T18:23:27.181Z] Total : 34774.74 135.84 0.00 0.00 1836.50 677.42 8318.03 00:14:09.124 00:14:09.124 real 0m12.821s 00:14:09.124 user 0m8.756s 00:14:09.124 sys 0m3.509s 00:14:09.124 18:23:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:09.124 ************************************ 00:14:09.124 END TEST xnvme_bdevperf 00:14:09.124 ************************************ 00:14:09.124 18:23:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:09.124 18:23:27 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:09.124 18:23:27 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:09.124 18:23:27 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:09.124 18:23:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:09.124 ************************************ 00:14:09.124 START TEST xnvme_fio_plugin 00:14:09.124 ************************************ 00:14:09.124 18:23:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:09.124 18:23:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:09.124 18:23:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:14:09.124 18:23:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:09.124 18:23:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:09.124 18:23:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:09.124 18:23:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:09.124 18:23:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:09.124 18:23:27 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:09.124 18:23:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:09.124 18:23:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:09.124 18:23:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:09.124 18:23:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:09.124 18:23:27 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:09.124 18:23:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:09.124 18:23:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:09.124 18:23:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:09.124 18:23:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:09.124 18:23:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:09.124 18:23:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:09.124 18:23:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:09.124 18:23:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:09.124 18:23:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:09.124 18:23:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:09.124 { 00:14:09.124 "subsystems": [ 00:14:09.124 { 00:14:09.124 "subsystem": "bdev", 00:14:09.124 "config": [ 00:14:09.124 { 00:14:09.124 "params": { 00:14:09.124 "io_mechanism": "io_uring", 00:14:09.124 "conserve_cpu": true, 00:14:09.124 "filename": "/dev/nvme0n1", 00:14:09.124 "name": "xnvme_bdev" 00:14:09.124 }, 00:14:09.124 "method": "bdev_xnvme_create" 00:14:09.124 }, 00:14:09.124 { 00:14:09.124 "method": "bdev_wait_for_examine" 00:14:09.124 } 00:14:09.124 ] 00:14:09.124 } 00:14:09.124 ] 00:14:09.124 } 00:14:09.386 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:09.386 fio-3.35 00:14:09.386 Starting 1 thread 00:14:15.968 00:14:15.968 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70519: Wed Nov 20 18:23:33 2024 00:14:15.968 read: IOPS=33.8k, BW=132MiB/s (139MB/s)(661MiB/5001msec) 00:14:15.968 slat (nsec): min=2732, max=67256, avg=3373.22, stdev=1739.41 00:14:15.968 clat (usec): min=872, max=3423, avg=1754.49, stdev=253.10 00:14:15.968 lat (usec): min=875, max=3455, avg=1757.86, stdev=253.43 00:14:15.968 clat percentiles (usec): 00:14:15.968 | 1.00th=[ 1303], 5.00th=[ 1401], 10.00th=[ 1450], 20.00th=[ 1532], 00:14:15.968 | 30.00th=[ 1598], 40.00th=[ 1663], 50.00th=[ 1729], 60.00th=[ 1795], 00:14:15.968 | 70.00th=[ 1860], 80.00th=[ 1958], 90.00th=[ 2089], 95.00th=[ 2212], 00:14:15.968 | 99.00th=[ 2442], 99.50th=[ 2540], 99.90th=[ 2835], 99.95th=[ 2999], 00:14:15.968 | 99.99th=[ 3294] 00:14:15.968 bw ( KiB/s): min=130048, max=140288, per=100.00%, avg=135793.78, 
stdev=3394.08, samples=9 00:14:15.968 iops : min=32512, max=35072, avg=33948.44, stdev=848.52, samples=9 00:14:15.968 lat (usec) : 1000=0.02% 00:14:15.968 lat (msec) : 2=83.64%, 4=16.34% 00:14:15.968 cpu : usr=58.26%, sys=38.18%, ctx=9, majf=0, minf=762 00:14:15.968 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:15.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.968 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:14:15.968 issued rwts: total=169131,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:15.968 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:15.968 00:14:15.968 Run status group 0 (all jobs): 00:14:15.968 READ: bw=132MiB/s (139MB/s), 132MiB/s-132MiB/s (139MB/s-139MB/s), io=661MiB (693MB), run=5001-5001msec 00:14:15.968 ----------------------------------------------------- 00:14:15.968 Suppressions used: 00:14:15.968 count bytes template 00:14:15.968 1 11 /usr/src/fio/parse.c 00:14:15.968 1 8 libtcmalloc_minimal.so 00:14:15.968 1 904 libcrypto.so 00:14:15.968 ----------------------------------------------------- 00:14:15.968 00:14:15.968 18:23:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:15.968 18:23:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:15.969 18:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:15.969 18:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:15.969 18:23:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:15.969 18:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:15.969 18:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:15.969 18:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:15.969 18:23:34 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:15.969 18:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:15.969 18:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:15.969 18:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:15.969 18:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:15.969 18:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:15.969 18:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:15.969 18:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:15.969 18:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:15.969 18:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:14:15.969 18:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:15.969 18:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:15.969 18:23:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:15.969 { 00:14:15.969 "subsystems": [ 00:14:15.969 { 00:14:15.969 "subsystem": "bdev", 00:14:15.969 "config": [ 00:14:15.969 { 00:14:15.969 "params": { 00:14:15.969 "io_mechanism": "io_uring", 00:14:15.969 "conserve_cpu": true, 00:14:15.969 "filename": "/dev/nvme0n1", 00:14:15.969 "name": "xnvme_bdev" 00:14:15.969 }, 00:14:15.969 "method": "bdev_xnvme_create" 00:14:15.969 }, 00:14:15.969 { 00:14:15.969 "method": "bdev_wait_for_examine" 00:14:15.969 } 00:14:15.969 ] 00:14:15.969 } 00:14:15.969 ] 00:14:15.969 } 00:14:16.256 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:16.256 fio-3.35 00:14:16.256 Starting 1 thread 00:14:22.871 00:14:22.871 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70611: Wed Nov 20 18:23:40 2024 00:14:22.871 write: IOPS=34.5k, BW=135MiB/s (141MB/s)(673MiB/5001msec); 0 zone resets 00:14:22.871 slat (usec): min=2, max=441, avg= 3.56, stdev= 2.04 00:14:22.871 clat (usec): min=1110, max=3998, avg=1713.65, stdev=245.34 00:14:22.871 lat (usec): min=1113, max=4001, avg=1717.21, stdev=245.68 00:14:22.871 clat percentiles (usec): 00:14:22.871 | 1.00th=[ 1303], 5.00th=[ 1385], 10.00th=[ 1434], 20.00th=[ 1516], 00:14:22.871 | 30.00th=[ 1565], 40.00th=[ 1614], 50.00th=[ 1680], 60.00th=[ 1745], 00:14:22.871 | 70.00th=[ 1811], 80.00th=[ 1893], 90.00th=[ 2040], 95.00th=[ 2180], 00:14:22.871 | 99.00th=[ 2442], 99.50th=[ 2573], 99.90th=[ 2868], 99.95th=[ 3228], 00:14:22.871 | 99.99th=[ 3523] 00:14:22.871 bw ( KiB/s): min=134339, max=145400, per=100.00%, avg=138326.56, stdev=3297.10, samples=9 00:14:22.871 iops : min=33584, max=36350, avg=34581.56, stdev=824.39, samples=9 00:14:22.871 lat (msec) : 2=88.19%, 4=11.81% 00:14:22.871 cpu : usr=62.16%, sys=34.38%, ctx=6, majf=0, minf=762 00:14:22.871 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:22.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.871 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:14:22.871 issued rwts: total=0,172345,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:22.871 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:22.871 00:14:22.871 Run status group 0 (all jobs): 00:14:22.871 WRITE: bw=135MiB/s (141MB/s), 135MiB/s-135MiB/s (141MB/s-141MB/s), io=673MiB (706MB), run=5001-5001msec 00:14:22.871 ----------------------------------------------------- 00:14:22.871 Suppressions used: 00:14:22.871 count bytes template 00:14:22.871 1 11 /usr/src/fio/parse.c 00:14:22.871 1 8 libtcmalloc_minimal.so 00:14:22.871 1 904 libcrypto.so 00:14:22.871 ----------------------------------------------------- 00:14:22.871 00:14:22.871 00:14:22.871 real 0m13.774s 00:14:22.871 user 0m8.869s 00:14:22.871 sys 0m4.219s 00:14:22.871 18:23:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:22.871 18:23:41 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@10 -- # set +x 00:14:22.871 ************************************ 00:14:22.871 END TEST xnvme_fio_plugin 00:14:22.871 ************************************ 00:14:22.871 18:23:41 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:22.871 18:23:41 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:14:22.871 18:23:41 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:14:22.871 18:23:41 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:14:22.871 18:23:41 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:22.871 18:23:41 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:22.871 18:23:41 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:22.871 18:23:41 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:22.871 18:23:41 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:22.871 18:23:41 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:22.871 18:23:41 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:22.871 18:23:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:22.871 ************************************ 00:14:22.871 START TEST xnvme_rpc 00:14:22.871 ************************************ 00:14:22.871 18:23:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:22.871 18:23:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:22.871 18:23:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:22.871 18:23:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:22.871 18:23:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:22.871 18:23:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70697 00:14:22.871 18:23:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70697 00:14:22.871 18:23:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70697 ']' 00:14:22.871 18:23:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:22.871 18:23:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:22.871 18:23:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.871 18:23:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:22.871 18:23:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:22.871 18:23:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.133 [2024-11-20 18:23:41.568988] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
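The xnvme_rpc pass now starting drives a create/inspect/delete cycle over RPC against the spdk_tgt it just launched. A minimal hand-run sketch of the same cycle (the rpc.py path and the default RPC socket are assumptions, not shown in this log; the jq filter is the one the test itself uses):

# Create the bdev the way the test does, read a field back, then tear down.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'   # expect: /dev/ng0n1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_xnvme_delete xnvme_bdev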
00:14:23.133 [2024-11-20 18:23:41.569154] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70697 ] 00:14:23.133 [2024-11-20 18:23:41.735540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.394 [2024-11-20 18:23:41.854350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.967 18:23:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:23.967 18:23:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:23.967 18:23:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:14:23.967 18:23:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.967 18:23:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.967 xnvme_bdev 00:14:23.967 18:23:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.967 18:23:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:23.967 18:23:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:23.967 18:23:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:23.967 18:23:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.967 18:23:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.967 18:23:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.967 18:23:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:23.967 18:23:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:23.967 18:23:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:23.967 18:23:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:23.967 18:23:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.967 18:23:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.228 18:23:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.228 18:23:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:24.228 18:23:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:24.228 18:23:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:24.228 18:23:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:24.228 18:23:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.228 18:23:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.228 18:23:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.228 18:23:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:24.228 18:23:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:24.228 18:23:42 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:24.228 18:23:42 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:24.228 18:23:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.228 18:23:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.228 18:23:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.228 18:23:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:24.228 18:23:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:24.228 18:23:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.228 18:23:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.228 18:23:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.228 18:23:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70697 00:14:24.228 18:23:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70697 ']' 00:14:24.228 18:23:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70697 00:14:24.228 18:23:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:24.229 18:23:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:24.229 18:23:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70697 00:14:24.229 18:23:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:24.229 killing process with pid 70697 00:14:24.229 18:23:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:24.229 18:23:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70697' 00:14:24.229 18:23:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70697 00:14:24.229 18:23:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70697 00:14:26.140 00:14:26.140 real 0m2.915s 00:14:26.140 user 0m2.949s 00:14:26.140 sys 0m0.443s 00:14:26.140 18:23:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:26.140 ************************************ 00:14:26.140 END TEST xnvme_rpc 00:14:26.140 ************************************ 00:14:26.140 18:23:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.140 18:23:44 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:26.140 18:23:44 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:26.140 18:23:44 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:26.140 18:23:44 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:26.140 ************************************ 00:14:26.140 START TEST xnvme_bdevperf 00:14:26.140 ************************************ 00:14:26.140 18:23:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:26.140 18:23:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:26.140 18:23:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:26.140 18:23:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:26.140 18:23:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:26.140 18:23:44 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:26.140 18:23:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:26.140 18:23:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:26.140 { 00:14:26.140 "subsystems": [ 00:14:26.140 { 00:14:26.140 "subsystem": "bdev", 00:14:26.140 "config": [ 00:14:26.140 { 00:14:26.140 "params": { 00:14:26.140 "io_mechanism": "io_uring_cmd", 00:14:26.140 "conserve_cpu": false, 00:14:26.140 "filename": "/dev/ng0n1", 00:14:26.140 "name": "xnvme_bdev" 00:14:26.140 }, 00:14:26.140 "method": "bdev_xnvme_create" 00:14:26.140 }, 00:14:26.140 { 00:14:26.140 "method": "bdev_wait_for_examine" 00:14:26.140 } 00:14:26.140 ] 00:14:26.140 } 00:14:26.140 ] 00:14:26.140 } 00:14:26.140 [2024-11-20 18:23:44.521861] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:14:26.140 [2024-11-20 18:23:44.521983] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70765 ] 00:14:26.140 [2024-11-20 18:23:44.681707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.401 [2024-11-20 18:23:44.782976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.401 Running I/O for 5 seconds... 00:14:28.731 37824.00 IOPS, 147.75 MiB/s [2024-11-20T18:23:48.306Z] 37056.00 IOPS, 144.75 MiB/s [2024-11-20T18:23:49.250Z] 36181.33 IOPS, 141.33 MiB/s [2024-11-20T18:23:50.192Z] 35707.75 IOPS, 139.48 MiB/s [2024-11-20T18:23:50.192Z] 35696.80 IOPS, 139.44 MiB/s 00:14:31.563 Latency(us) 00:14:31.563 [2024-11-20T18:23:50.192Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:31.563 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:31.563 xnvme_bdev : 5.00 35691.27 139.42 0.00 0.00 1789.60 387.54 8217.21 00:14:31.563 [2024-11-20T18:23:50.192Z] =================================================================================================================== 00:14:31.563 [2024-11-20T18:23:50.192Z] Total : 35691.27 139.42 0.00 0.00 1789.60 387.54 8217.21 00:14:32.508 18:23:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:32.508 18:23:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:32.508 18:23:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:32.508 18:23:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:32.508 18:23:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:32.508 { 00:14:32.508 "subsystems": [ 00:14:32.508 { 00:14:32.508 "subsystem": "bdev", 00:14:32.508 "config": [ 00:14:32.508 { 00:14:32.508 "params": { 00:14:32.508 "io_mechanism": "io_uring_cmd", 00:14:32.508 "conserve_cpu": false, 00:14:32.508 "filename": "/dev/ng0n1", 00:14:32.508 "name": "xnvme_bdev" 00:14:32.508 }, 00:14:32.508 "method": "bdev_xnvme_create" 00:14:32.508 }, 00:14:32.508 { 00:14:32.508 "method": "bdev_wait_for_examine" 00:14:32.508 } 00:14:32.508 ] 00:14:32.508 } 00:14:32.508 ] 00:14:32.508 } 00:14:32.508 [2024-11-20 18:23:50.850678] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
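The --json /dev/fd/62 argument in these bdevperf invocations is gen_conf's output fed over an anonymous fd. A standalone sketch using process substitution, with the exact config this run printed (only the fd plumbing is swapped for an inline string):

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 64 -w randread -t 5 \
  -T xnvme_bdev -o 4096 --json <(printf '%s' '{"subsystems":[{"subsystem":"bdev",
  "config":[{"method":"bdev_xnvme_create","params":{"io_mechanism":"io_uring_cmd",
  "conserve_cpu":false,"filename":"/dev/ng0n1","name":"xnvme_bdev"}},
  {"method":"bdev_wait_for_examine"}]}]}')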
00:14:32.508 [2024-11-20 18:23:50.850847] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70840 ] 00:14:32.508 [2024-11-20 18:23:51.018238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.770 [2024-11-20 18:23:51.137070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.032 Running I/O for 5 seconds... 00:14:34.923 38880.00 IOPS, 151.88 MiB/s [2024-11-20T18:23:54.494Z] 38366.00 IOPS, 149.87 MiB/s [2024-11-20T18:23:55.437Z] 37405.00 IOPS, 146.11 MiB/s [2024-11-20T18:23:56.822Z] 36907.00 IOPS, 144.17 MiB/s [2024-11-20T18:23:56.822Z] 37189.00 IOPS, 145.27 MiB/s 00:14:38.193 Latency(us) 00:14:38.193 [2024-11-20T18:23:56.822Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.193 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:38.193 xnvme_bdev : 5.00 37184.39 145.25 0.00 0.00 1717.45 123.67 10687.41 00:14:38.193 [2024-11-20T18:23:56.822Z] =================================================================================================================== 00:14:38.193 [2024-11-20T18:23:56.822Z] Total : 37184.39 145.25 0.00 0.00 1717.45 123.67 10687.41 00:14:38.766 18:23:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:38.766 18:23:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:38.766 18:23:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:38.766 18:23:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:38.766 18:23:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:38.766 { 00:14:38.766 "subsystems": [ 00:14:38.766 { 00:14:38.766 "subsystem": "bdev", 00:14:38.766 "config": [ 00:14:38.766 { 00:14:38.766 "params": { 00:14:38.766 "io_mechanism": "io_uring_cmd", 00:14:38.766 "conserve_cpu": false, 00:14:38.766 "filename": "/dev/ng0n1", 00:14:38.766 "name": "xnvme_bdev" 00:14:38.766 }, 00:14:38.766 "method": "bdev_xnvme_create" 00:14:38.766 }, 00:14:38.766 { 00:14:38.766 "method": "bdev_wait_for_examine" 00:14:38.766 } 00:14:38.766 ] 00:14:38.766 } 00:14:38.766 ] 00:14:38.766 } 00:14:38.766 [2024-11-20 18:23:57.286873] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:14:38.766 [2024-11-20 18:23:57.287019] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70914 ] 00:14:39.026 [2024-11-20 18:23:57.453955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.027 [2024-11-20 18:23:57.585866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.287 Running I/O for 5 seconds... 
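A quick consistency check on these Device Information tables: the MiB/s column is just IOPS times the 4 KiB I/O size. For the randwrite row above:

awk 'BEGIN { printf "%.2f MiB/s\n", 37184.39 * 4096 / 1048576 }'   # -> 145.25, matching the table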
00:14:41.250 79232.00 IOPS, 309.50 MiB/s [2024-11-20T18:24:01.264Z] 79264.00 IOPS, 309.62 MiB/s [2024-11-20T18:24:02.196Z] 79360.00 IOPS, 310.00 MiB/s [2024-11-20T18:24:03.132Z] 82256.00 IOPS, 321.31 MiB/s 00:14:44.503 Latency(us) 00:14:44.503 [2024-11-20T18:24:03.132Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.503 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:44.503 xnvme_bdev : 5.00 85059.12 332.26 0.00 0.00 749.05 529.33 2634.04 00:14:44.503 [2024-11-20T18:24:03.132Z] =================================================================================================================== 00:14:44.503 [2024-11-20T18:24:03.132Z] Total : 85059.12 332.26 0.00 0.00 749.05 529.33 2634.04 00:14:45.073 18:24:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:45.073 18:24:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:45.073 18:24:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:45.073 18:24:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:45.073 18:24:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:45.073 { 00:14:45.073 "subsystems": [ 00:14:45.073 { 00:14:45.073 "subsystem": "bdev", 00:14:45.073 "config": [ 00:14:45.073 { 00:14:45.073 "params": { 00:14:45.073 "io_mechanism": "io_uring_cmd", 00:14:45.073 "conserve_cpu": false, 00:14:45.073 "filename": "/dev/ng0n1", 00:14:45.073 "name": "xnvme_bdev" 00:14:45.073 }, 00:14:45.073 "method": "bdev_xnvme_create" 00:14:45.073 }, 00:14:45.073 { 00:14:45.073 "method": "bdev_wait_for_examine" 00:14:45.073 } 00:14:45.073 ] 00:14:45.073 } 00:14:45.073 ] 00:14:45.073 } 00:14:45.335 [2024-11-20 18:24:03.702364] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:14:45.335 [2024-11-20 18:24:03.702510] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70988 ] 00:14:45.335 [2024-11-20 18:24:03.866874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.596 [2024-11-20 18:24:03.984645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.854 Running I/O for 5 seconds... 
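The reported average latency also squares with Little's law (in-flight requests = IOPS x latency) at the fixed queue depth of 64 these runs use; for the unmap run just above:

awk 'BEGIN { printf "%.0f us\n", 64 / 85059.12 * 1e6 }'   # ~752 us vs the ~749 us average reported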
00:14:47.727 43470.00 IOPS, 169.80 MiB/s [2024-11-20T18:24:07.296Z] 41076.50 IOPS, 160.46 MiB/s [2024-11-20T18:24:08.674Z] 39000.33 IOPS, 152.35 MiB/s [2024-11-20T18:24:09.620Z] 36503.75 IOPS, 142.59 MiB/s [2024-11-20T18:24:09.620Z] 36225.00 IOPS, 141.50 MiB/s 00:14:50.991 Latency(us) 00:14:50.991 [2024-11-20T18:24:09.620Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.991 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:50.991 xnvme_bdev : 5.00 36214.41 141.46 0.00 0.00 1762.94 97.67 98404.82 00:14:50.991 [2024-11-20T18:24:09.620Z] =================================================================================================================== 00:14:50.991 [2024-11-20T18:24:09.620Z] Total : 36214.41 141.46 0.00 0.00 1762.94 97.67 98404.82 00:14:51.628 00:14:51.628 real 0m25.600s 00:14:51.628 user 0m13.881s 00:14:51.628 sys 0m11.220s 00:14:51.628 18:24:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:51.628 ************************************ 00:14:51.628 END TEST xnvme_bdevperf 00:14:51.628 ************************************ 00:14:51.628 18:24:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:51.628 18:24:10 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:51.628 18:24:10 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:51.628 18:24:10 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:51.628 18:24:10 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:51.628 ************************************ 00:14:51.628 START TEST xnvme_fio_plugin 00:14:51.628 ************************************ 00:14:51.628 18:24:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:51.628 18:24:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:51.628 18:24:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:51.628 18:24:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:51.628 18:24:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:51.628 18:24:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:51.628 18:24:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:51.628 18:24:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:51.628 18:24:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:51.628 18:24:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:51.628 18:24:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:51.628 18:24:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:51.628 18:24:10 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 
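The ldd | grep libasan | awk sequence the harness runs next (as in the earlier fio passes) exists because stock fio is not ASan-instrumented while the SPDK plugin is, so the ASan runtime must be preloaded ahead of the plugin. A standalone form of the same dance (the on-disk JSON conf path is a stand-in for the harness's /dev/fd/62 plumbing):

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # /usr/lib64/libasan.so.8 on this host
[[ -n "$asan_lib" ]] && LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
  --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_bdev.json --filename=xnvme_bdev \
  --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based \
  --runtime=5 --thread=1 --name=xnvme_bdev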
00:14:51.628 18:24:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:51.628 18:24:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:51.628 18:24:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:51.629 18:24:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:51.629 18:24:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:51.629 18:24:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:51.629 18:24:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:51.629 18:24:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:51.629 18:24:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:51.629 18:24:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:51.629 18:24:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:51.629 { 00:14:51.629 "subsystems": [ 00:14:51.629 { 00:14:51.629 "subsystem": "bdev", 00:14:51.629 "config": [ 00:14:51.629 { 00:14:51.629 "params": { 00:14:51.629 "io_mechanism": "io_uring_cmd", 00:14:51.629 "conserve_cpu": false, 00:14:51.629 "filename": "/dev/ng0n1", 00:14:51.629 "name": "xnvme_bdev" 00:14:51.629 }, 00:14:51.629 "method": "bdev_xnvme_create" 00:14:51.629 }, 00:14:51.629 { 00:14:51.629 "method": "bdev_wait_for_examine" 00:14:51.629 } 00:14:51.629 ] 00:14:51.629 } 00:14:51.629 ] 00:14:51.629 } 00:14:51.890 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:51.890 fio-3.35 00:14:51.890 Starting 1 thread 00:14:58.479 00:14:58.479 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71107: Wed Nov 20 18:24:16 2024 00:14:58.479 read: IOPS=42.0k, BW=164MiB/s (172MB/s)(821MiB/5001msec) 00:14:58.479 slat (nsec): min=2728, max=83052, avg=3288.52, stdev=1712.73 00:14:58.479 clat (usec): min=895, max=4133, avg=1391.46, stdev=253.49 00:14:58.479 lat (usec): min=898, max=4166, avg=1394.75, stdev=254.06 00:14:58.479 clat percentiles (usec): 00:14:58.479 | 1.00th=[ 1020], 5.00th=[ 1090], 10.00th=[ 1123], 20.00th=[ 1188], 00:14:58.479 | 30.00th=[ 1237], 40.00th=[ 1287], 50.00th=[ 1336], 60.00th=[ 1401], 00:14:58.479 | 70.00th=[ 1483], 80.00th=[ 1582], 90.00th=[ 1745], 95.00th=[ 1893], 00:14:58.479 | 99.00th=[ 2180], 99.50th=[ 2278], 99.90th=[ 2507], 99.95th=[ 2638], 00:14:58.479 | 99.99th=[ 3916] 00:14:58.479 bw ( KiB/s): min=144896, max=189440, per=98.75%, avg=166058.67, stdev=18238.99, samples=9 00:14:58.479 iops : min=36224, max=47360, avg=41514.67, stdev=4559.75, samples=9 00:14:58.479 lat (usec) : 1000=0.56% 00:14:58.479 lat (msec) : 2=96.83%, 4=2.60%, 10=0.01% 00:14:58.479 cpu : usr=39.54%, sys=59.34%, ctx=13, majf=0, minf=762 00:14:58.479 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:58.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:58.479 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=1.5%, >=64=0.0% 00:14:58.479 issued rwts: total=210240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:58.479 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:58.479 00:14:58.479 Run status group 0 (all jobs): 00:14:58.479 READ: bw=164MiB/s (172MB/s), 164MiB/s-164MiB/s (172MB/s-172MB/s), io=821MiB (861MB), run=5001-5001msec 00:14:58.479 ----------------------------------------------------- 00:14:58.479 Suppressions used: 00:14:58.479 count bytes template 00:14:58.479 1 11 /usr/src/fio/parse.c 00:14:58.479 1 8 libtcmalloc_minimal.so 00:14:58.479 1 904 libcrypto.so 00:14:58.479 ----------------------------------------------------- 00:14:58.479 00:14:58.479 18:24:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:58.479 18:24:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:58.479 18:24:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:58.479 18:24:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:58.479 18:24:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:58.479 18:24:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:58.479 18:24:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:58.479 18:24:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:58.479 18:24:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:58.479 18:24:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:58.479 18:24:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:58.479 18:24:17 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:58.479 18:24:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:58.479 18:24:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:58.479 18:24:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:58.479 18:24:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:58.479 18:24:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:58.479 18:24:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:58.479 18:24:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:58.479 18:24:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:58.479 18:24:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:58.479 { 00:14:58.479 "subsystems": [ 00:14:58.479 { 00:14:58.479 "subsystem": "bdev", 00:14:58.479 "config": [ 00:14:58.479 { 00:14:58.479 "params": { 00:14:58.479 "io_mechanism": "io_uring_cmd", 00:14:58.479 "conserve_cpu": false, 00:14:58.479 "filename": "/dev/ng0n1", 00:14:58.479 "name": "xnvme_bdev" 00:14:58.479 }, 00:14:58.479 "method": "bdev_xnvme_create" 00:14:58.479 }, 00:14:58.479 { 00:14:58.479 "method": "bdev_wait_for_examine" 00:14:58.479 } 00:14:58.479 ] 00:14:58.479 } 00:14:58.479 ] 00:14:58.479 } 00:14:58.741 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:58.741 fio-3.35 00:14:58.741 Starting 1 thread 00:15:05.325 00:15:05.325 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71198: Wed Nov 20 18:24:23 2024 00:15:05.325 write: IOPS=41.4k, BW=162MiB/s (170MB/s)(809MiB/5006msec); 0 zone resets 00:15:05.325 slat (usec): min=2, max=115, avg= 3.58, stdev= 1.79 00:15:05.325 clat (usec): min=147, max=8438, avg=1411.66, stdev=298.76 00:15:05.325 lat (usec): min=151, max=8441, avg=1415.24, stdev=299.07 00:15:05.325 clat percentiles (usec): 00:15:05.325 | 1.00th=[ 816], 5.00th=[ 1020], 10.00th=[ 1106], 20.00th=[ 1188], 00:15:05.325 | 30.00th=[ 1254], 40.00th=[ 1319], 50.00th=[ 1385], 60.00th=[ 1450], 00:15:05.325 | 70.00th=[ 1532], 80.00th=[ 1631], 90.00th=[ 1762], 95.00th=[ 1893], 00:15:05.325 | 99.00th=[ 2212], 99.50th=[ 2442], 99.90th=[ 3294], 99.95th=[ 4178], 00:15:05.325 | 99.99th=[ 6063] 00:15:05.325 bw ( KiB/s): min=144718, max=178728, per=100.00%, avg=165663.00, stdev=13523.04, samples=10 00:15:05.325 iops : min=36179, max=44682, avg=41415.70, stdev=3380.85, samples=10 00:15:05.325 lat (usec) : 250=0.01%, 500=0.19%, 750=0.41%, 1000=3.71% 00:15:05.325 lat (msec) : 2=92.90%, 4=2.72%, 10=0.06% 00:15:05.325 cpu : usr=39.96%, sys=58.92%, ctx=9, majf=0, minf=762 00:15:05.325 IO depths : 1=1.3%, 2=2.7%, 4=5.5%, 8=11.1%, 16=22.5%, 32=55.2%, >=64=1.8% 00:15:05.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.325 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.4%, >=64=0.0% 00:15:05.325 issued rwts: total=0,207158,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:05.325 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:05.325 00:15:05.325 Run status group 0 (all jobs): 00:15:05.325 WRITE: bw=162MiB/s (170MB/s), 162MiB/s-162MiB/s (170MB/s-170MB/s), io=809MiB (849MB), run=5006-5006msec 00:15:05.586 ----------------------------------------------------- 00:15:05.586 Suppressions used: 00:15:05.586 count bytes template 00:15:05.586 1 11 /usr/src/fio/parse.c 00:15:05.586 1 8 libtcmalloc_minimal.so 00:15:05.586 1 904 libcrypto.so 00:15:05.586 ----------------------------------------------------- 00:15:05.586 00:15:05.586 00:15:05.586 real 0m13.880s 00:15:05.586 user 0m6.841s 00:15:05.586 sys 0m6.598s 00:15:05.586 18:24:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:05.586 ************************************ 00:15:05.586 END TEST xnvme_fio_plugin 00:15:05.586 ************************************ 00:15:05.586 18:24:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:05.586 18:24:24 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:05.586 18:24:24 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:15:05.586 18:24:24 nvme_xnvme -- xnvme/xnvme.sh@84 -- 
# conserve_cpu=true 00:15:05.586 18:24:24 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:05.586 18:24:24 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:05.586 18:24:24 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:05.586 18:24:24 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:05.586 ************************************ 00:15:05.586 START TEST xnvme_rpc 00:15:05.586 ************************************ 00:15:05.586 18:24:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:05.586 18:24:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:05.586 18:24:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:05.586 18:24:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:05.586 18:24:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:05.586 18:24:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71278 00:15:05.586 18:24:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71278 00:15:05.586 18:24:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71278 ']' 00:15:05.586 18:24:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.586 18:24:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:05.586 18:24:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:05.586 18:24:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:05.586 18:24:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.586 18:24:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:05.586 [2024-11-20 18:24:24.177743] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
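This second xnvme_rpc pass differs from the first only in handing -c (the harness's cc["true"] value) to bdev_xnvme_create and expecting conserve_cpu to read back true. Hand-run sketch, with the same assumed rpc.py path as before:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true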
00:15:05.586 [2024-11-20 18:24:24.177890] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71278 ] 00:15:05.848 [2024-11-20 18:24:24.341809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.848 [2024-11-20 18:24:24.462773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:06.792 xnvme_bdev 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:06.792 
18:24:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71278 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71278 ']' 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71278 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71278 00:15:06.792 killing process with pid 71278 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71278' 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71278 00:15:06.792 18:24:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71278 00:15:08.707 ************************************ 00:15:08.707 END TEST xnvme_rpc 00:15:08.707 ************************************ 00:15:08.707 00:15:08.707 real 0m2.907s 00:15:08.707 user 0m2.887s 00:15:08.707 sys 0m0.498s 00:15:08.707 18:24:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:08.707 18:24:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:08.707 18:24:27 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:08.708 18:24:27 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:08.708 18:24:27 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:08.708 18:24:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:08.708 ************************************ 00:15:08.708 START TEST xnvme_bdevperf 00:15:08.708 ************************************ 00:15:08.708 18:24:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:08.708 18:24:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:08.708 18:24:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:15:08.708 18:24:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:08.708 18:24:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:08.708 18:24:27 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:15:08.708 18:24:27 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:08.708 18:24:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:08.708 { 00:15:08.708 "subsystems": [ 00:15:08.708 { 00:15:08.708 "subsystem": "bdev", 00:15:08.708 "config": [ 00:15:08.708 { 00:15:08.708 "params": { 00:15:08.708 "io_mechanism": "io_uring_cmd", 00:15:08.708 "conserve_cpu": true, 00:15:08.708 "filename": "/dev/ng0n1", 00:15:08.708 "name": "xnvme_bdev" 00:15:08.708 }, 00:15:08.708 "method": "bdev_xnvme_create" 00:15:08.708 }, 00:15:08.708 { 00:15:08.708 "method": "bdev_wait_for_examine" 00:15:08.708 } 00:15:08.708 ] 00:15:08.708 } 00:15:08.708 ] 00:15:08.708 } 00:15:08.708 [2024-11-20 18:24:27.152512] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:15:08.708 [2024-11-20 18:24:27.153059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71352 ] 00:15:08.708 [2024-11-20 18:24:27.319969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.967 [2024-11-20 18:24:27.435895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.228 Running I/O for 5 seconds... 00:15:11.109 38335.00 IOPS, 149.75 MiB/s [2024-11-20T18:24:31.124Z] 41007.50 IOPS, 160.19 MiB/s [2024-11-20T18:24:32.066Z] 42074.67 IOPS, 164.35 MiB/s [2024-11-20T18:24:33.009Z] 41012.00 IOPS, 160.20 MiB/s 00:15:14.380 Latency(us) 00:15:14.380 [2024-11-20T18:24:33.009Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.380 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:14.380 xnvme_bdev : 5.00 42225.95 164.95 0.00 0.00 1512.05 869.61 4360.66 00:15:14.380 [2024-11-20T18:24:33.009Z] =================================================================================================================== 00:15:14.380 [2024-11-20T18:24:33.009Z] Total : 42225.95 164.95 0.00 0.00 1512.05 869.61 4360.66 00:15:14.954 18:24:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:14.954 18:24:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:14.954 18:24:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:14.954 18:24:33 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:14.954 18:24:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:14.954 { 00:15:14.954 "subsystems": [ 00:15:14.954 { 00:15:14.954 "subsystem": "bdev", 00:15:14.954 "config": [ 00:15:14.954 { 00:15:14.954 "params": { 00:15:14.954 "io_mechanism": "io_uring_cmd", 00:15:14.954 "conserve_cpu": true, 00:15:14.954 "filename": "/dev/ng0n1", 00:15:14.954 "name": "xnvme_bdev" 00:15:14.954 }, 00:15:14.954 "method": "bdev_xnvme_create" 00:15:14.954 }, 00:15:14.954 { 00:15:14.954 "method": "bdev_wait_for_examine" 00:15:14.954 } 00:15:14.954 ] 00:15:14.954 } 00:15:14.954 ] 00:15:14.954 } 00:15:15.215 [2024-11-20 18:24:33.594573] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
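One hedged observation on the randread table a few lines up: this conserve_cpu=true pass lands well above the conserve_cpu=false pass earlier (42225.95 vs 35691.27 IOPS), but these are separate runs on a shared CI host, so run-to-run variation could account for much of the gap:

awk 'BEGIN { printf "%.1f%%\n", (42225.95 / 35691.27 - 1) * 100 }'   # ~18.3% more IOPS this pass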
00:15:15.215 [2024-11-20 18:24:33.594897] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71426 ] 00:15:15.215 [2024-11-20 18:24:33.762711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.476 [2024-11-20 18:24:33.887978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.737 Running I/O for 5 seconds... 00:15:17.625 39408.00 IOPS, 153.94 MiB/s [2024-11-20T18:24:37.196Z] 39561.50 IOPS, 154.54 MiB/s [2024-11-20T18:24:38.578Z] 40724.33 IOPS, 159.08 MiB/s [2024-11-20T18:24:39.521Z] 41146.00 IOPS, 160.73 MiB/s 00:15:20.892 Latency(us) 00:15:20.892 [2024-11-20T18:24:39.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:20.892 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:20.892 xnvme_bdev : 5.00 41493.82 162.09 0.00 0.00 1537.95 598.65 5595.77 00:15:20.892 [2024-11-20T18:24:39.521Z] =================================================================================================================== 00:15:20.892 [2024-11-20T18:24:39.521Z] Total : 41493.82 162.09 0.00 0.00 1537.95 598.65 5595.77 00:15:21.466 18:24:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:21.466 18:24:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:15:21.466 18:24:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:21.466 18:24:39 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:21.466 18:24:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:21.466 { 00:15:21.466 "subsystems": [ 00:15:21.466 { 00:15:21.466 "subsystem": "bdev", 00:15:21.466 "config": [ 00:15:21.466 { 00:15:21.466 "params": { 00:15:21.466 "io_mechanism": "io_uring_cmd", 00:15:21.466 "conserve_cpu": true, 00:15:21.466 "filename": "/dev/ng0n1", 00:15:21.466 "name": "xnvme_bdev" 00:15:21.466 }, 00:15:21.466 "method": "bdev_xnvme_create" 00:15:21.466 }, 00:15:21.466 { 00:15:21.466 "method": "bdev_wait_for_examine" 00:15:21.466 } 00:15:21.466 ] 00:15:21.466 } 00:15:21.466 ] 00:15:21.466 } 00:15:21.466 [2024-11-20 18:24:40.047648] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:15:21.466 [2024-11-20 18:24:40.047819] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71507 ] 00:15:21.728 [2024-11-20 18:24:40.215479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.728 [2024-11-20 18:24:40.335568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.302 Running I/O for 5 seconds... 
00:15:24.191 76992.00 IOPS, 300.75 MiB/s [2024-11-20T18:24:43.762Z] 77088.00 IOPS, 301.12 MiB/s [2024-11-20T18:24:44.701Z] 77994.67 IOPS, 304.67 MiB/s [2024-11-20T18:24:45.646Z] 78544.00 IOPS, 306.81 MiB/s 00:15:27.017 Latency(us) 00:15:27.017 [2024-11-20T18:24:45.646Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.017 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:15:27.017 xnvme_bdev : 5.00 78444.83 306.43 0.00 0.00 812.41 415.90 2923.91 00:15:27.017 [2024-11-20T18:24:45.646Z] =================================================================================================================== 00:15:27.017 [2024-11-20T18:24:45.646Z] Total : 78444.83 306.43 0.00 0.00 812.41 415.90 2923.91 00:15:27.959 18:24:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:27.959 18:24:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:15:27.959 18:24:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:27.959 18:24:46 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:27.959 18:24:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:27.959 { 00:15:27.959 "subsystems": [ 00:15:27.959 { 00:15:27.959 "subsystem": "bdev", 00:15:27.959 "config": [ 00:15:27.959 { 00:15:27.959 "params": { 00:15:27.959 "io_mechanism": "io_uring_cmd", 00:15:27.959 "conserve_cpu": true, 00:15:27.959 "filename": "/dev/ng0n1", 00:15:27.959 "name": "xnvme_bdev" 00:15:27.959 }, 00:15:27.959 "method": "bdev_xnvme_create" 00:15:27.959 }, 00:15:27.959 { 00:15:27.959 "method": "bdev_wait_for_examine" 00:15:27.959 } 00:15:27.959 ] 00:15:27.959 } 00:15:27.959 ] 00:15:27.959 } 00:15:27.959 [2024-11-20 18:24:46.413449] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:15:27.959 [2024-11-20 18:24:46.413572] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71577 ] 00:15:27.959 [2024-11-20 18:24:46.576378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.219 [2024-11-20 18:24:46.693993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.493 Running I/O for 5 seconds... 
00:15:30.457 40947.00 IOPS, 159.95 MiB/s [2024-11-20T18:24:50.023Z] 41234.00 IOPS, 161.07 MiB/s [2024-11-20T18:24:51.400Z] 40528.67 IOPS, 158.32 MiB/s [2024-11-20T18:24:52.340Z] 39851.25 IOPS, 155.67 MiB/s [2024-11-20T18:24:52.340Z] 38588.00 IOPS, 150.73 MiB/s 00:15:33.711 Latency(us) 00:15:33.711 [2024-11-20T18:24:52.340Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:33.711 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:33.711 xnvme_bdev : 5.01 38532.04 150.52 0.00 0.00 1654.93 75.22 23290.49 00:15:33.711 [2024-11-20T18:24:52.340Z] =================================================================================================================== 00:15:33.711 [2024-11-20T18:24:52.340Z] Total : 38532.04 150.52 0.00 0.00 1654.93 75.22 23290.49 00:15:34.281 00:15:34.281 real 0m25.699s 00:15:34.281 user 0m17.798s 00:15:34.281 sys 0m5.693s 00:15:34.281 ************************************ 00:15:34.281 END TEST xnvme_bdevperf 00:15:34.281 ************************************ 00:15:34.281 18:24:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:34.281 18:24:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:34.281 18:24:52 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:34.281 18:24:52 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:34.281 18:24:52 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:34.281 18:24:52 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:34.281 ************************************ 00:15:34.281 START TEST xnvme_fio_plugin 00:15:34.281 ************************************ 00:15:34.281 18:24:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:34.281 18:24:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:34.281 18:24:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:34.281 18:24:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:34.281 18:24:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:34.281 18:24:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:34.281 18:24:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:34.281 18:24:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:34.281 18:24:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:34.281 18:24:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:34.281 18:24:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:34.281 18:24:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:34.281 18:24:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # 
local asan_lib= 00:15:34.281 18:24:52 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:34.281 18:24:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:34.281 18:24:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:34.281 18:24:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:34.281 18:24:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:34.281 18:24:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:34.281 18:24:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:34.281 18:24:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:34.281 18:24:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:34.281 18:24:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:34.281 18:24:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:34.281 { 00:15:34.281 "subsystems": [ 00:15:34.281 { 00:15:34.281 "subsystem": "bdev", 00:15:34.281 "config": [ 00:15:34.281 { 00:15:34.281 "params": { 00:15:34.281 "io_mechanism": "io_uring_cmd", 00:15:34.281 "conserve_cpu": true, 00:15:34.281 "filename": "/dev/ng0n1", 00:15:34.281 "name": "xnvme_bdev" 00:15:34.281 }, 00:15:34.281 "method": "bdev_xnvme_create" 00:15:34.281 }, 00:15:34.281 { 00:15:34.281 "method": "bdev_wait_for_examine" 00:15:34.281 } 00:15:34.281 ] 00:15:34.281 } 00:15:34.281 ] 00:15:34.281 } 00:15:34.542 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:34.542 fio-3.35 00:15:34.542 Starting 1 thread 00:15:41.127 00:15:41.127 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71694: Wed Nov 20 18:24:58 2024 00:15:41.127 read: IOPS=36.5k, BW=143MiB/s (149MB/s)(713MiB/5001msec) 00:15:41.127 slat (usec): min=2, max=151, avg= 3.61, stdev= 2.11 00:15:41.127 clat (usec): min=974, max=3515, avg=1605.05, stdev=256.09 00:15:41.127 lat (usec): min=977, max=3546, avg=1608.66, stdev=256.51 00:15:41.127 clat percentiles (usec): 00:15:41.127 | 1.00th=[ 1156], 5.00th=[ 1270], 10.00th=[ 1319], 20.00th=[ 1401], 00:15:41.127 | 30.00th=[ 1450], 40.00th=[ 1500], 50.00th=[ 1565], 60.00th=[ 1631], 00:15:41.127 | 70.00th=[ 1696], 80.00th=[ 1795], 90.00th=[ 1942], 95.00th=[ 2073], 00:15:41.127 | 99.00th=[ 2376], 99.50th=[ 2507], 99.90th=[ 2900], 99.95th=[ 3032], 00:15:41.127 | 99.99th=[ 3326] 00:15:41.127 bw ( KiB/s): min=139776, max=149504, per=100.00%, avg=146204.44, stdev=3348.72, samples=9 00:15:41.127 iops : min=34944, max=37376, avg=36551.11, stdev=837.18, samples=9 00:15:41.127 lat (usec) : 1000=0.01% 00:15:41.127 lat (msec) : 2=92.58%, 4=7.41% 00:15:41.127 cpu : usr=51.86%, sys=44.74%, ctx=22, majf=0, minf=762 00:15:41.127 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:41.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:41.127 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 
00:15:41.127 issued rwts: total=182464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:41.127 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:41.127 00:15:41.127 Run status group 0 (all jobs): 00:15:41.127 READ: bw=143MiB/s (149MB/s), 143MiB/s-143MiB/s (149MB/s-149MB/s), io=713MiB (747MB), run=5001-5001msec 00:15:41.127 ----------------------------------------------------- 00:15:41.127 Suppressions used: 00:15:41.127 count bytes template 00:15:41.127 1 11 /usr/src/fio/parse.c 00:15:41.127 1 8 libtcmalloc_minimal.so 00:15:41.127 1 904 libcrypto.so 00:15:41.127 ----------------------------------------------------- 00:15:41.127 00:15:41.127 18:24:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:41.127 18:24:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:41.127 18:24:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:41.127 18:24:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:41.127 18:24:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:41.127 18:24:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:41.127 18:24:59 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:41.127 18:24:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:41.127 18:24:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:41.127 18:24:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:41.127 18:24:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:41.127 18:24:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:41.127 18:24:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:41.127 18:24:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:41.127 18:24:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:41.127 18:24:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:41.127 18:24:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:41.127 18:24:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:41.127 18:24:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:41.127 18:24:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:41.127 18:24:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:41.127 { 00:15:41.127 "subsystems": [ 00:15:41.127 { 00:15:41.127 "subsystem": "bdev", 00:15:41.127 "config": [ 00:15:41.127 { 00:15:41.127 "params": { 00:15:41.127 "io_mechanism": "io_uring_cmd", 00:15:41.127 "conserve_cpu": true, 00:15:41.127 "filename": "/dev/ng0n1", 00:15:41.127 "name": "xnvme_bdev" 00:15:41.127 }, 00:15:41.127 "method": "bdev_xnvme_create" 00:15:41.127 }, 00:15:41.127 { 00:15:41.127 "method": "bdev_wait_for_examine" 00:15:41.127 } 00:15:41.127 ] 00:15:41.127 } 00:15:41.127 ] 00:15:41.127 } 00:15:41.387 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:41.388 fio-3.35 00:15:41.388 Starting 1 thread 00:15:47.973 00:15:47.973 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71785: Wed Nov 20 18:25:05 2024 00:15:47.973 write: IOPS=37.1k, BW=145MiB/s (152MB/s)(726MiB/5001msec); 0 zone resets 00:15:47.973 slat (usec): min=2, max=106, avg= 4.10, stdev= 2.39 00:15:47.973 clat (usec): min=466, max=5137, avg=1556.14, stdev=252.94 00:15:47.973 lat (usec): min=469, max=5141, avg=1560.24, stdev=253.49 00:15:47.973 clat percentiles (usec): 00:15:47.973 | 1.00th=[ 1106], 5.00th=[ 1221], 10.00th=[ 1270], 20.00th=[ 1352], 00:15:47.973 | 30.00th=[ 1418], 40.00th=[ 1467], 50.00th=[ 1516], 60.00th=[ 1582], 00:15:47.973 | 70.00th=[ 1647], 80.00th=[ 1745], 90.00th=[ 1876], 95.00th=[ 1991], 00:15:47.973 | 99.00th=[ 2311], 99.50th=[ 2474], 99.90th=[ 3195], 99.95th=[ 3458], 00:15:47.973 | 99.99th=[ 3818] 00:15:47.973 bw ( KiB/s): min=141352, max=156408, per=99.86%, avg=148358.22, stdev=5249.12, samples=9 00:15:47.973 iops : min=35338, max=39102, avg=37089.56, stdev=1312.28, samples=9 00:15:47.973 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.11% 00:15:47.973 lat (msec) : 2=95.06%, 4=4.82%, 10=0.01% 00:15:47.973 cpu : usr=49.66%, sys=46.06%, ctx=12, majf=0, minf=762 00:15:47.973 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.5%, 16=25.0%, 32=50.2%, >=64=1.6% 00:15:47.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.973 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:47.973 issued rwts: total=0,185745,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:47.973 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:47.973 00:15:47.973 Run status group 0 (all jobs): 00:15:47.973 WRITE: bw=145MiB/s (152MB/s), 145MiB/s-145MiB/s (152MB/s-152MB/s), io=726MiB (761MB), run=5001-5001msec 00:15:47.973 ----------------------------------------------------- 00:15:47.973 Suppressions used: 00:15:47.973 count bytes template 00:15:47.973 1 11 /usr/src/fio/parse.c 00:15:47.973 1 8 libtcmalloc_minimal.so 00:15:47.973 1 904 libcrypto.so 00:15:47.973 ----------------------------------------------------- 00:15:47.973 00:15:47.973 ************************************ 00:15:47.973 END TEST xnvme_fio_plugin 00:15:47.973 ************************************ 00:15:47.973 00:15:47.973 real 0m13.737s 00:15:47.973 user 0m7.872s 00:15:47.973 sys 0m5.147s 00:15:47.973 18:25:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:47.973 18:25:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:48.234 Process with pid 71278 is not found 00:15:48.234 18:25:06 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 71278 00:15:48.234 18:25:06 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71278 ']' 00:15:48.234 18:25:06 nvme_xnvme -- common/autotest_common.sh@958 -- # 
kill -0 71278 00:15:48.234 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71278) - No such process 00:15:48.234 18:25:06 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 71278 is not found' 00:15:48.234 00:15:48.234 real 3m31.302s 00:15:48.234 user 1m57.191s 00:15:48.234 sys 1m19.243s 00:15:48.234 18:25:06 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:48.234 18:25:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:48.234 ************************************ 00:15:48.234 END TEST nvme_xnvme 00:15:48.234 ************************************ 00:15:48.234 18:25:06 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:48.234 18:25:06 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:48.234 18:25:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:48.234 18:25:06 -- common/autotest_common.sh@10 -- # set +x 00:15:48.234 ************************************ 00:15:48.234 START TEST blockdev_xnvme 00:15:48.234 ************************************ 00:15:48.234 18:25:06 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:48.235 * Looking for test storage... 00:15:48.235 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:48.235 18:25:06 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:48.235 18:25:06 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:15:48.235 18:25:06 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:48.235 18:25:06 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:48.235 18:25:06 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:48.235 18:25:06 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:48.235 18:25:06 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:48.235 18:25:06 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:48.235 18:25:06 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:48.235 18:25:06 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:48.235 18:25:06 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:48.235 18:25:06 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:48.235 18:25:06 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:48.235 18:25:06 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:48.235 18:25:06 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:48.235 18:25:06 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:48.235 18:25:06 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:48.235 18:25:06 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:48.235 18:25:06 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:48.235 18:25:06 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:48.235 18:25:06 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:48.235 18:25:06 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:48.235 18:25:06 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:48.235 18:25:06 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:48.235 18:25:06 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:48.235 18:25:06 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:48.235 18:25:06 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:48.235 18:25:06 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:48.235 18:25:06 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:48.235 18:25:06 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:48.235 18:25:06 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:48.235 18:25:06 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:48.235 18:25:06 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:48.235 18:25:06 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:48.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.235 --rc genhtml_branch_coverage=1 00:15:48.235 --rc genhtml_function_coverage=1 00:15:48.235 --rc genhtml_legend=1 00:15:48.235 --rc geninfo_all_blocks=1 00:15:48.235 --rc geninfo_unexecuted_blocks=1 00:15:48.235 00:15:48.235 ' 00:15:48.235 18:25:06 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:48.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.235 --rc genhtml_branch_coverage=1 00:15:48.235 --rc genhtml_function_coverage=1 00:15:48.235 --rc genhtml_legend=1 00:15:48.235 --rc geninfo_all_blocks=1 00:15:48.235 --rc geninfo_unexecuted_blocks=1 00:15:48.235 00:15:48.235 ' 00:15:48.235 18:25:06 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:48.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.235 --rc genhtml_branch_coverage=1 00:15:48.235 --rc genhtml_function_coverage=1 00:15:48.235 --rc genhtml_legend=1 00:15:48.235 --rc geninfo_all_blocks=1 00:15:48.235 --rc geninfo_unexecuted_blocks=1 00:15:48.235 00:15:48.235 ' 00:15:48.235 18:25:06 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:48.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.235 --rc genhtml_branch_coverage=1 00:15:48.235 --rc genhtml_function_coverage=1 00:15:48.235 --rc genhtml_legend=1 00:15:48.235 --rc geninfo_all_blocks=1 00:15:48.235 --rc geninfo_unexecuted_blocks=1 00:15:48.235 00:15:48.235 ' 00:15:48.235 18:25:06 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:48.235 18:25:06 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:48.235 18:25:06 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:48.235 18:25:06 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:48.235 18:25:06 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:48.235 18:25:06 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:48.235 18:25:06 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:15:48.235 18:25:06 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:48.235 18:25:06 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:48.235 18:25:06 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:15:48.235 18:25:06 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:15:48.235 18:25:06 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:15:48.235 18:25:06 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:15:48.235 18:25:06 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:15:48.235 18:25:06 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:15:48.235 18:25:06 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:15:48.235 18:25:06 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:15:48.235 18:25:06 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:15:48.235 18:25:06 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:15:48.235 18:25:06 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:15:48.235 18:25:06 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:15:48.235 18:25:06 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:15:48.235 18:25:06 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:15:48.235 18:25:06 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:15:48.235 18:25:06 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71919 00:15:48.235 18:25:06 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:48.235 18:25:06 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:48.235 18:25:06 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 71919 00:15:48.496 18:25:06 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 71919 ']' 00:15:48.496 18:25:06 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.496 18:25:06 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:48.496 18:25:06 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.496 18:25:06 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:48.496 18:25:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:48.496 [2024-11-20 18:25:06.948531] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
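[Annotation] The waitforlisten step above blocks until the freshly forked spdk_tgt (pid 71919) is actually accepting RPCs. A minimal sketch of that polling pattern, assuming scripts/rpc.py and the default /var/tmp/spdk.sock socket — the real helper in autotest_common.sh adds retry limits, /proc checks, and richer diagnostics:

  waitforlisten_sketch() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do
          # Bail out early if the target died during startup
          kill -0 "$pid" 2>/dev/null || return 1
          # Done once the RPC socket answers a trivial method call
          if scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null; then
              return 0
          fi
          sleep 0.5
      done
      return 1
  }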
00:15:48.497 [2024-11-20 18:25:06.948697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71919 ] 00:15:48.497 [2024-11-20 18:25:07.112471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.757 [2024-11-20 18:25:07.233732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.331 18:25:07 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:49.331 18:25:07 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:15:49.331 18:25:07 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:15:49.331 18:25:07 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:15:49.331 18:25:07 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:49.331 18:25:07 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:49.331 18:25:07 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:49.903 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:50.476 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:50.476 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:50.476 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:15:50.476 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:15:50.476 18:25:08 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:50.476 18:25:08 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:15:50.476 18:25:08 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:15:50.476 18:25:08 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:15:50.476 18:25:08 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:50.476 18:25:08 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:15:50.476 18:25:08 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:50.476 18:25:08 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:50.476 18:25:08 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:50.476 18:25:08 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:50.476 18:25:08 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:15:50.476 18:25:08 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:50.476 18:25:08 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:50.476 18:25:08 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:50.476 18:25:08 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:50.476 18:25:08 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:15:50.476 18:25:08 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:50.476 18:25:08 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:50.476 18:25:08 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:50.476 18:25:08 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
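[Annotation] The get_zoned_devs loop traced here skips zoned namespaces by reading sysfs: a device counts as zoned when its queue/zoned attribute reads anything other than "none". A compact equivalent of the is_block_zoned check seen in the trace:

  is_block_zoned() {
      local device=$1
      # The attribute is absent for non-block devices and very old kernels
      [[ -e /sys/block/$device/queue/zoned ]] || return 1
      # "none" means conventional; "host-aware"/"host-managed" are zoned
      [[ $(</sys/block/$device/queue/zoned) != none ]]
  }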
00:15:50.476 18:25:08 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1c1n1 00:15:50.476 18:25:08 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:15:50.476 18:25:08 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:15:50.476 18:25:08 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:50.476 18:25:08 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:50.476 18:25:08 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:15:50.476 18:25:08 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:15:50.476 18:25:08 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:50.476 18:25:08 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:50.476 18:25:08 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:50.477 18:25:08 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:15:50.477 18:25:08 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:15:50.477 18:25:08 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:50.477 18:25:08 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:50.477 18:25:08 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:50.477 18:25:08 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:15:50.477 18:25:08 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:15:50.477 18:25:08 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:50.477 18:25:08 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:50.477 18:25:08 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:50.477 18:25:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:50.477 18:25:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:50.477 18:25:08 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:50.477 18:25:08 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:50.477 18:25:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:15:50.477 18:25:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:50.477 18:25:08 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:50.477 18:25:08 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:50.477 18:25:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:15:50.477 18:25:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:50.477 18:25:08 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:50.477 18:25:08 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:50.477 18:25:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:50.477 18:25:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:50.477 18:25:08 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:50.477 18:25:08 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:15:50.477 18:25:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:50.477 18:25:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:50.477 18:25:08 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:50.477 18:25:08 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:50.477 18:25:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:50.477 18:25:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:50.477 18:25:08 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:50.477 18:25:08 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:50.477 18:25:08 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:50.477 18:25:08 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.477 18:25:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.477 18:25:08 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:15:50.477 nvme0n1 00:15:50.477 nvme0n2 00:15:50.477 nvme0n3 00:15:50.477 nvme1n1 00:15:50.477 nvme2n1 00:15:50.477 nvme3n1 00:15:50.477 18:25:09 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.477 18:25:09 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:15:50.477 18:25:09 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.477 18:25:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.477 18:25:09 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.477 18:25:09 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:15:50.477 18:25:09 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:15:50.477 18:25:09 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.477 18:25:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.477 18:25:09 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.477 18:25:09 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:15:50.477 18:25:09 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.477 18:25:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.738 18:25:09 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.738 18:25:09 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:50.738 18:25:09 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.738 18:25:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.738 18:25:09 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.738 18:25:09 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:15:50.738 18:25:09 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:15:50.738 18:25:09 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.738 18:25:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.738 18:25:09 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq 
-r '.[] | select(.claimed == false)' 00:15:50.738 18:25:09 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.738 18:25:09 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:15:50.738 18:25:09 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:15:50.738 18:25:09 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "95630b87-c255-4228-849a-8ed90c28f973"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "95630b87-c255-4228-849a-8ed90c28f973",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "5927abee-5957-4bfc-ba0c-b2cffe5e16ae"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5927abee-5957-4bfc-ba0c-b2cffe5e16ae",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "1b21a195-f412-40d6-af80-f9d608ecfa98"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1b21a195-f412-40d6-af80-f9d608ecfa98",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "2e86029f-ebe0-4811-b91b-1d2195ff81cd"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "2e86029f-ebe0-4811-b91b-1d2195ff81cd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' 
"unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "857f1bb3-5409-421c-af89-385a0044e2e1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "857f1bb3-5409-421c-af89-385a0044e2e1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "5586cd95-5775-4567-91d9-c3addfa1edef"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "5586cd95-5775-4567-91d9-c3addfa1edef",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:50.738 18:25:09 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:15:50.738 18:25:09 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:15:50.738 18:25:09 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:15:50.738 18:25:09 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 71919 00:15:50.738 18:25:09 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71919 ']' 00:15:50.738 18:25:09 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 71919 00:15:50.738 18:25:09 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:15:50.738 18:25:09 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:50.738 18:25:09 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71919 00:15:50.738 killing process with pid 71919 00:15:50.738 18:25:09 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:50.738 18:25:09 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:50.738 18:25:09 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71919' 00:15:50.738 18:25:09 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 71919 00:15:50.738 18:25:09 
blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 71919 00:15:52.650 18:25:10 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:52.650 18:25:10 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:52.650 18:25:10 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:52.650 18:25:10 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:52.650 18:25:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:52.650 ************************************ 00:15:52.650 START TEST bdev_hello_world 00:15:52.650 ************************************ 00:15:52.650 18:25:10 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:52.650 [2024-11-20 18:25:10.931581] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:15:52.650 [2024-11-20 18:25:10.931733] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72202 ] 00:15:52.650 [2024-11-20 18:25:11.095288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.650 [2024-11-20 18:25:11.211357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.222 [2024-11-20 18:25:11.604711] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:53.222 [2024-11-20 18:25:11.604775] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:53.222 [2024-11-20 18:25:11.604793] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:53.222 [2024-11-20 18:25:11.606890] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:53.222 [2024-11-20 18:25:11.607961] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:53.222 [2024-11-20 18:25:11.608167] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:53.222 [2024-11-20 18:25:11.608955] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
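[Annotation] The bdev_hello_world run above reduces to one example binary driven by the generated bdev config (paths as printed in the trace, shown here relative to the repo root): hello_bdev opens the named xnvme bdev, writes "Hello World!" through a bdev I/O channel, reads it back, and stops the app.

  # Invocation behind run_test bdev_hello_world, per the trace
  build/examples/hello_bdev \
      --json test/bdev/bdev.json \
      -b nvme0n1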
00:15:53.222 00:15:53.222 [2024-11-20 18:25:11.608991] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:53.794 ************************************ 00:15:53.794 END TEST bdev_hello_world 00:15:53.794 ************************************ 00:15:53.794 00:15:53.794 real 0m1.517s 00:15:53.794 user 0m1.139s 00:15:53.794 sys 0m0.233s 00:15:53.794 18:25:12 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:53.794 18:25:12 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:54.055 18:25:12 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:15:54.055 18:25:12 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:54.055 18:25:12 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:54.055 18:25:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:54.055 ************************************ 00:15:54.055 START TEST bdev_bounds 00:15:54.055 ************************************ 00:15:54.055 18:25:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:15:54.055 18:25:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72234 00:15:54.055 18:25:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:54.055 18:25:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72234' 00:15:54.055 Process bdevio pid: 72234 00:15:54.055 18:25:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72234 00:15:54.055 18:25:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72234 ']' 00:15:54.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.055 18:25:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.055 18:25:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:54.055 18:25:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:54.055 18:25:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.055 18:25:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:54.055 18:25:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:54.055 [2024-11-20 18:25:12.520656] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:15:54.055 [2024-11-20 18:25:12.521020] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72234 ] 00:15:54.317 [2024-11-20 18:25:12.687358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:54.317 [2024-11-20 18:25:12.814500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:54.317 [2024-11-20 18:25:12.814786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.317 [2024-11-20 18:25:12.814798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:54.889 18:25:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:54.889 18:25:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:15:54.889 18:25:13 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:54.889 I/O targets: 00:15:54.889 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:54.889 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:54.889 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:54.889 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:54.889 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:54.889 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:54.889 00:15:54.889 00:15:54.889 CUnit - A unit testing framework for C - Version 2.1-3 00:15:54.889 http://cunit.sourceforge.net/ 00:15:54.889 00:15:54.889 00:15:54.889 Suite: bdevio tests on: nvme3n1 00:15:54.889 Test: blockdev write read block ...passed 00:15:54.889 Test: blockdev write zeroes read block ...passed 00:15:54.889 Test: blockdev write zeroes read no split ...passed 00:15:55.150 Test: blockdev write zeroes read split ...passed 00:15:55.150 Test: blockdev write zeroes read split partial ...passed 00:15:55.150 Test: blockdev reset ...passed 00:15:55.150 Test: blockdev write read 8 blocks ...passed 00:15:55.150 Test: blockdev write read size > 128k ...passed 00:15:55.150 Test: blockdev write read invalid size ...passed 00:15:55.150 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:55.150 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:55.150 Test: blockdev write read max offset ...passed 00:15:55.150 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:55.150 Test: blockdev writev readv 8 blocks ...passed 00:15:55.150 Test: blockdev writev readv 30 x 1block ...passed 00:15:55.150 Test: blockdev writev readv block ...passed 00:15:55.150 Test: blockdev writev readv size > 128k ...passed 00:15:55.150 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:55.150 Test: blockdev comparev and writev ...passed 00:15:55.150 Test: blockdev nvme passthru rw ...passed 00:15:55.150 Test: blockdev nvme passthru vendor specific ...passed 00:15:55.150 Test: blockdev nvme admin passthru ...passed 00:15:55.150 Test: blockdev copy ...passed 00:15:55.150 Suite: bdevio tests on: nvme2n1 00:15:55.150 Test: blockdev write read block ...passed 00:15:55.150 Test: blockdev write zeroes read block ...passed 00:15:55.150 Test: blockdev write zeroes read no split ...passed 00:15:55.150 Test: blockdev write zeroes read split ...passed 00:15:55.150 Test: blockdev write zeroes read split partial ...passed 00:15:55.150 Test: blockdev reset ...passed 
00:15:55.150 Test: blockdev write read 8 blocks ...passed 00:15:55.150 Test: blockdev write read size > 128k ...passed 00:15:55.150 Test: blockdev write read invalid size ...passed 00:15:55.150 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:55.150 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:55.150 Test: blockdev write read max offset ...passed 00:15:55.150 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:55.150 Test: blockdev writev readv 8 blocks ...passed 00:15:55.150 Test: blockdev writev readv 30 x 1block ...passed 00:15:55.150 Test: blockdev writev readv block ...passed 00:15:55.150 Test: blockdev writev readv size > 128k ...passed 00:15:55.150 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:55.150 Test: blockdev comparev and writev ...passed 00:15:55.150 Test: blockdev nvme passthru rw ...passed 00:15:55.150 Test: blockdev nvme passthru vendor specific ...passed 00:15:55.150 Test: blockdev nvme admin passthru ...passed 00:15:55.150 Test: blockdev copy ...passed 00:15:55.150 Suite: bdevio tests on: nvme1n1 00:15:55.150 Test: blockdev write read block ...passed 00:15:55.150 Test: blockdev write zeroes read block ...passed 00:15:55.150 Test: blockdev write zeroes read no split ...passed 00:15:55.150 Test: blockdev write zeroes read split ...passed 00:15:55.150 Test: blockdev write zeroes read split partial ...passed 00:15:55.150 Test: blockdev reset ...passed 00:15:55.150 Test: blockdev write read 8 blocks ...passed 00:15:55.150 Test: blockdev write read size > 128k ...passed 00:15:55.150 Test: blockdev write read invalid size ...passed 00:15:55.150 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:55.150 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:55.150 Test: blockdev write read max offset ...passed 00:15:55.150 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:55.150 Test: blockdev writev readv 8 blocks ...passed 00:15:55.150 Test: blockdev writev readv 30 x 1block ...passed 00:15:55.150 Test: blockdev writev readv block ...passed 00:15:55.150 Test: blockdev writev readv size > 128k ...passed 00:15:55.150 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:55.150 Test: blockdev comparev and writev ...passed 00:15:55.150 Test: blockdev nvme passthru rw ...passed 00:15:55.150 Test: blockdev nvme passthru vendor specific ...passed 00:15:55.150 Test: blockdev nvme admin passthru ...passed 00:15:55.150 Test: blockdev copy ...passed 00:15:55.151 Suite: bdevio tests on: nvme0n3 00:15:55.151 Test: blockdev write read block ...passed 00:15:55.151 Test: blockdev write zeroes read block ...passed 00:15:55.151 Test: blockdev write zeroes read no split ...passed 00:15:55.413 Test: blockdev write zeroes read split ...passed 00:15:55.413 Test: blockdev write zeroes read split partial ...passed 00:15:55.413 Test: blockdev reset ...passed 00:15:55.413 Test: blockdev write read 8 blocks ...passed 00:15:55.413 Test: blockdev write read size > 128k ...passed 00:15:55.413 Test: blockdev write read invalid size ...passed 00:15:55.413 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:55.413 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:55.413 Test: blockdev write read max offset ...passed 00:15:55.413 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:55.413 Test: blockdev writev readv 8 blocks 
...passed 00:15:55.413 Test: blockdev writev readv 30 x 1block ...passed 00:15:55.413 Test: blockdev writev readv block ...passed 00:15:55.413 Test: blockdev writev readv size > 128k ...passed 00:15:55.413 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:55.413 Test: blockdev comparev and writev ...passed 00:15:55.413 Test: blockdev nvme passthru rw ...passed 00:15:55.413 Test: blockdev nvme passthru vendor specific ...passed 00:15:55.413 Test: blockdev nvme admin passthru ...passed 00:15:55.413 Test: blockdev copy ...passed 00:15:55.413 Suite: bdevio tests on: nvme0n2 00:15:55.413 Test: blockdev write read block ...passed 00:15:55.413 Test: blockdev write zeroes read block ...passed 00:15:55.413 Test: blockdev write zeroes read no split ...passed 00:15:55.413 Test: blockdev write zeroes read split ...passed 00:15:55.413 Test: blockdev write zeroes read split partial ...passed 00:15:55.413 Test: blockdev reset ...passed 00:15:55.413 Test: blockdev write read 8 blocks ...passed 00:15:55.413 Test: blockdev write read size > 128k ...passed 00:15:55.413 Test: blockdev write read invalid size ...passed 00:15:55.413 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:55.413 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:55.413 Test: blockdev write read max offset ...passed 00:15:55.413 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:55.413 Test: blockdev writev readv 8 blocks ...passed 00:15:55.413 Test: blockdev writev readv 30 x 1block ...passed 00:15:55.413 Test: blockdev writev readv block ...passed 00:15:55.413 Test: blockdev writev readv size > 128k ...passed 00:15:55.413 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:55.413 Test: blockdev comparev and writev ...passed 00:15:55.413 Test: blockdev nvme passthru rw ...passed 00:15:55.413 Test: blockdev nvme passthru vendor specific ...passed 00:15:55.413 Test: blockdev nvme admin passthru ...passed 00:15:55.413 Test: blockdev copy ...passed 00:15:55.413 Suite: bdevio tests on: nvme0n1 00:15:55.413 Test: blockdev write read block ...passed 00:15:55.413 Test: blockdev write zeroes read block ...passed 00:15:55.413 Test: blockdev write zeroes read no split ...passed 00:15:55.413 Test: blockdev write zeroes read split ...passed 00:15:55.413 Test: blockdev write zeroes read split partial ...passed 00:15:55.413 Test: blockdev reset ...passed 00:15:55.413 Test: blockdev write read 8 blocks ...passed 00:15:55.413 Test: blockdev write read size > 128k ...passed 00:15:55.413 Test: blockdev write read invalid size ...passed 00:15:55.413 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:55.413 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:55.413 Test: blockdev write read max offset ...passed 00:15:55.413 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:55.413 Test: blockdev writev readv 8 blocks ...passed 00:15:55.413 Test: blockdev writev readv 30 x 1block ...passed 00:15:55.413 Test: blockdev writev readv block ...passed 00:15:55.413 Test: blockdev writev readv size > 128k ...passed 00:15:55.413 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:55.413 Test: blockdev comparev and writev ...passed 00:15:55.413 Test: blockdev nvme passthru rw ...passed 00:15:55.413 Test: blockdev nvme passthru vendor specific ...passed 00:15:55.413 Test: blockdev nvme admin passthru ...passed 00:15:55.413 Test: blockdev copy ...passed 
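[Annotation] With the last per-bdev suite finished, note that the whole bounds test is just two cooperating processes, as the trace shows: bdevio parked on its RPC socket waiting for work, and tests.py kicking off the CUnit suites. A trimmed sketch with relative paths (the harness itself uses absolute paths and killprocess for shutdown); -w makes bdevio wait for a perform_tests request instead of running immediately, and -s 0 matches the PRE_RESERVED_MEM=0 setting:

  # Start bdevio against the generated config and remember its pid
  test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
  bdevio_pid=$!
  # Trigger the suites over bdevio's RPC, then shut it down
  test/bdev/bdevio/tests.py perform_tests
  kill "$bdevio_pid"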
00:15:55.413 00:15:55.413 Run Summary: Type Total Ran Passed Failed Inactive 00:15:55.413 suites 6 6 n/a 0 0 00:15:55.413 tests 138 138 138 0 0 00:15:55.413 asserts 780 780 780 0 n/a 00:15:55.413 00:15:55.413 Elapsed time = 1.278 seconds 00:15:55.413 0 00:15:55.413 18:25:13 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72234 00:15:55.413 18:25:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72234 ']' 00:15:55.413 18:25:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72234 00:15:55.413 18:25:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:15:55.413 18:25:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:55.413 18:25:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72234 00:15:55.413 killing process with pid 72234 00:15:55.413 18:25:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:55.413 18:25:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:55.413 18:25:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72234' 00:15:55.413 18:25:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72234 00:15:55.413 18:25:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72234 00:15:56.359 ************************************ 00:15:56.359 END TEST bdev_bounds 00:15:56.359 18:25:14 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:56.359 00:15:56.359 real 0m2.336s 00:15:56.359 user 0m5.734s 00:15:56.359 sys 0m0.367s 00:15:56.359 18:25:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:56.359 18:25:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:56.359 ************************************ 00:15:56.359 18:25:14 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:56.359 18:25:14 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:56.359 18:25:14 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:56.359 18:25:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:56.359 ************************************ 00:15:56.359 START TEST bdev_nbd 00:15:56.359 ************************************ 00:15:56.359 18:25:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:56.359 18:25:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:56.359 18:25:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:56.359 18:25:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:56.359 18:25:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:56.359 18:25:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:56.359 18:25:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:56.359 18:25:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:15:56.359 18:25:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:56.359 18:25:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:56.359 18:25:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:56.359 18:25:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:56.359 18:25:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:56.359 18:25:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:56.359 18:25:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:56.359 18:25:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:56.359 18:25:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72296 00:15:56.359 18:25:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:56.359 18:25:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72296 /var/tmp/spdk-nbd.sock 00:15:56.359 18:25:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72296 ']' 00:15:56.359 18:25:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:56.359 18:25:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:56.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:56.359 18:25:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:56.359 18:25:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:56.359 18:25:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:56.359 18:25:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:56.359 [2024-11-20 18:25:14.928956] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
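[Annotation] The nbd_function_test flow that follows exports each xnvme bdev as a kernel NBD device through the dedicated /var/tmp/spdk-nbd.sock RPC socket, then proves the device is usable. A condensed sketch of the per-bdev steps visible in the trace (error handling and the retry loop omitted; nbd_start_disk with no explicit device argument reports the /dev/nbdX it attached):

  # Ask bdev_svc to attach the bdev to a free /dev/nbdX and report it
  nbd_device=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1)
  # waitfornbd: the kernel must list the device before we touch it
  grep -q -w "${nbd_device##*/}" /proc/partitions
  # Smoke-test with one 4 KiB direct read, as in the dd lines below
  dd if="$nbd_device" of=nbdtest bs=4096 count=1 iflag=direct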
00:15:56.359 [2024-11-20 18:25:14.929133] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:56.619 [2024-11-20 18:25:15.091240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.619 [2024-11-20 18:25:15.174993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.186 18:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:57.186 18:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:15:57.186 18:25:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:57.186 18:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:57.186 18:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:57.186 18:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:57.186 18:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:57.186 18:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:57.186 18:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:57.186 18:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:57.186 18:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:57.186 18:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:57.186 18:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:57.186 18:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:57.186 18:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:57.445 18:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:57.445 18:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:57.445 18:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:57.445 18:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:57.445 18:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:57.445 18:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:57.445 18:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:57.445 18:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:57.445 18:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:57.445 18:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:57.445 18:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:57.445 18:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:57.445 
1+0 records in 00:15:57.445 1+0 records out 00:15:57.445 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000331347 s, 12.4 MB/s 00:15:57.445 18:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:57.445 18:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:57.445 18:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:57.445 18:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:57.445 18:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:57.445 18:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:57.445 18:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:57.445 18:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:15:57.704 18:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:57.704 18:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:57.704 18:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:57.704 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:57.704 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:57.704 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:57.704 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:57.704 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:57.704 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:57.704 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:57.704 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:57.704 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:57.704 1+0 records in 00:15:57.704 1+0 records out 00:15:57.704 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000460066 s, 8.9 MB/s 00:15:57.704 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:57.704 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:57.704 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:57.704 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:57.704 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:57.704 18:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:57.704 18:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:57.704 18:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:15:57.963 18:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:57.963 18:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:57.963 18:25:16 
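
The per-bdev start-and-probe pattern traced above repeats for all six bdevs: nbd_start_disk is issued with only the bdev name (the app picks a free /dev/nbdN and the RPC prints it), waitfornbd polls /proc/partitions for up to 20 tries, a single 4 KiB O_DIRECT read proves the device serves I/O, and the read-back size is required to be non-zero. A minimal standalone sketch of that flow, using the RPC and socket paths from the log; the 0.1 s poll interval and the /tmp scratch path are assumptions, not shown in the trace:

    #!/usr/bin/env bash
    # Start each bdev as an NBD device and probe it (sketch of the traced flow).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    tmp=/tmp/nbdtest                       # stand-in for test/bdev/nbdtest
    for bdev in nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1; do
        # No device argument: the app assigns a slot and prints /dev/nbdN.
        nbd=$("$rpc" -s "$sock" nbd_start_disk "$bdev")
        name=$(basename "$nbd")
        # waitfornbd: the device exists once it shows up in /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions && break
            sleep 0.1                      # interval assumed; trace omits it
        done
        # One direct 4 KiB read, then the "'[' 4096 '!=' 0 ']'" size check.
        dd if="$nbd" of="$tmp" bs=4096 count=1 iflag=direct
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [[ $size != 0 ]] || exit 1
    done
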
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:57.963 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:15:57.963 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:57.963 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:57.963 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:57.963 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:15:57.963 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:57.963 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:57.963 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:57.963 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:57.963 1+0 records in 00:15:57.963 1+0 records out 00:15:57.963 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391649 s, 10.5 MB/s 00:15:57.963 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:57.963 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:57.963 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:57.963 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:57.963 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:57.963 18:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:57.963 18:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:57.963 18:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:58.221 18:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:58.221 18:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:58.221 18:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:58.221 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:15:58.221 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:58.221 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:58.221 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:58.221 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:15:58.221 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:58.221 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:58.221 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:58.221 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:58.221 1+0 records in 00:15:58.221 1+0 records out 00:15:58.221 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000500781 s, 8.2 MB/s 00:15:58.221 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.221 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:58.221 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.221 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:58.221 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:58.221 18:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:58.221 18:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:58.221 18:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:58.481 18:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:58.481 18:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:58.481 18:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:58.481 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:15:58.481 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:58.481 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:58.481 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:58.481 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:15:58.481 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:58.481 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:58.481 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:58.481 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:58.481 1+0 records in 00:15:58.481 1+0 records out 00:15:58.481 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000710095 s, 5.8 MB/s 00:15:58.481 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.481 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:58.481 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.481 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:58.481 18:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:58.481 18:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:58.481 18:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:58.481 18:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:58.739 18:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:58.739 18:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:58.739 18:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:58.739 18:25:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:15:58.739 18:25:17 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:58.739 18:25:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:58.739 18:25:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:58.739 18:25:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:15:58.739 18:25:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:58.739 18:25:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:58.739 18:25:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:58.739 18:25:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:58.739 1+0 records in 00:15:58.739 1+0 records out 00:15:58.739 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000774083 s, 5.3 MB/s 00:15:58.739 18:25:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.739 18:25:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:58.739 18:25:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.739 18:25:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:58.739 18:25:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:58.739 18:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:58.739 18:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:58.739 18:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:58.739 18:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:58.739 { 00:15:58.739 "nbd_device": "/dev/nbd0", 00:15:58.739 "bdev_name": "nvme0n1" 00:15:58.739 }, 00:15:58.739 { 00:15:58.739 "nbd_device": "/dev/nbd1", 00:15:58.739 "bdev_name": "nvme0n2" 00:15:58.739 }, 00:15:58.739 { 00:15:58.739 "nbd_device": "/dev/nbd2", 00:15:58.739 "bdev_name": "nvme0n3" 00:15:58.739 }, 00:15:58.739 { 00:15:58.739 "nbd_device": "/dev/nbd3", 00:15:58.739 "bdev_name": "nvme1n1" 00:15:58.739 }, 00:15:58.739 { 00:15:58.739 "nbd_device": "/dev/nbd4", 00:15:58.739 "bdev_name": "nvme2n1" 00:15:58.739 }, 00:15:58.739 { 00:15:58.739 "nbd_device": "/dev/nbd5", 00:15:58.739 "bdev_name": "nvme3n1" 00:15:58.739 } 00:15:58.739 ]' 00:15:58.739 18:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:58.739 18:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:58.739 { 00:15:58.739 "nbd_device": "/dev/nbd0", 00:15:58.739 "bdev_name": "nvme0n1" 00:15:58.739 }, 00:15:58.739 { 00:15:58.739 "nbd_device": "/dev/nbd1", 00:15:58.739 "bdev_name": "nvme0n2" 00:15:58.739 }, 00:15:58.739 { 00:15:58.739 "nbd_device": "/dev/nbd2", 00:15:58.739 "bdev_name": "nvme0n3" 00:15:58.739 }, 00:15:58.739 { 00:15:58.739 "nbd_device": "/dev/nbd3", 00:15:58.739 "bdev_name": "nvme1n1" 00:15:58.739 }, 00:15:58.739 { 00:15:58.739 "nbd_device": "/dev/nbd4", 00:15:58.739 "bdev_name": "nvme2n1" 00:15:58.739 }, 00:15:58.739 { 00:15:58.739 "nbd_device": "/dev/nbd5", 00:15:58.739 "bdev_name": "nvme3n1" 00:15:58.739 } 00:15:58.739 ]' 00:15:58.739 18:25:17 
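
The JSON captured above is the nbd_get_disks RPC result: one object per active export, pairing each nbd_device with its bdev_name; the jq call that follows in the trace reduces it to the device list. A sketch of the same query, assuming the socket path from the log:

    # List active NBD exports and extract the device paths (nbd_get_disks | jq).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    nbd_disks_json=$("$rpc" -s "$sock" nbd_get_disks)
    mapfile -t nbd_disks_name < <(jq -r '.[] | .nbd_device' <<<"$nbd_disks_json")
    printf '%s\n' "${nbd_disks_name[@]}"   # /dev/nbd0 ... /dev/nbd5 here
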
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:58.997 18:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:58.997 18:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:58.997 18:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:58.997 18:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:58.997 18:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:58.997 18:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:58.997 18:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:58.997 18:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:58.997 18:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:58.997 18:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:58.997 18:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:58.997 18:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:58.997 18:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:58.997 18:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:58.997 18:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:58.997 18:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:58.997 18:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:59.256 18:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:59.256 18:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:59.256 18:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:59.256 18:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:59.256 18:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:59.256 18:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:59.256 18:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:59.256 18:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:59.256 18:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:59.256 18:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:59.514 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:59.514 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:59.514 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:59.514 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:59.514 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:59.514 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:15:59.514 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:59.514 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:59.514 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:59.514 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:59.773 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:59.773 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:59.773 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:59.773 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:59.773 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:59.773 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:59.773 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:59.773 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:59.773 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:59.773 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:16:00.031 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:16:00.031 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:16:00.031 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:16:00.031 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:00.031 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:00.031 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:16:00.031 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:00.031 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:00.031 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:00.031 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:16:00.031 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:16:00.031 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:16:00.031 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:16:00.031 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:00.031 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:00.031 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:16:00.031 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:00.031 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:00.031 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:00.031 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:00.031 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:00.289 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:00.289 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:00.289 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:00.289 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:00.289 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:00.289 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:00.289 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:00.289 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:00.289 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:00.289 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:16:00.289 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:16:00.289 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:16:00.289 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:00.289 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:00.289 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:00.289 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:00.289 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:00.289 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:00.289 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:00.289 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:00.289 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:00.289 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:00.289 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:00.289 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:00.289 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:16:00.289 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:00.289 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:00.289 18:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:16:00.547 /dev/nbd0 00:16:00.547 18:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:00.547 18:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:00.547 18:25:19 blockdev_xnvme.bdev_nbd -- 
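
The teardown traced just above is the mirror image of the start loop: nbd_stop_disk per device, then waitfornbd_exit polls /proc/partitions until the name disappears, and nbd_get_count must report zero exports before the next round begins. A condensed sketch under the same assumptions (the poll interval is not shown in the trace):

    # Stop every export, wait for the kernel to drop each node, expect count 0.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    for dev in /dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5; do
        "$rpc" -s "$sock" nbd_stop_disk "$dev"
        name=$(basename "$dev")
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions || break   # gone: stop waiting
            sleep 0.1
        done
    done
    # grep -c prints 0 but exits non-zero on no match, hence the 'true'
    # visible in the trace; the count must be zero after a full teardown.
    count=$("$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' \
            | grep -c /dev/nbd || true)
    (( count == 0 )) || exit 1
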
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:00.547 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:00.547 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:00.547 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:00.547 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:00.547 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:00.547 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:00.547 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:00.547 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:00.547 1+0 records in 00:16:00.547 1+0 records out 00:16:00.547 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000994266 s, 4.1 MB/s 00:16:00.547 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.547 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:00.547 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.547 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:00.547 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:00.547 18:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:00.547 18:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:00.547 18:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:16:00.804 /dev/nbd1 00:16:00.804 18:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:00.804 18:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:00.804 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:00.804 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:00.804 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:00.804 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:00.804 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:00.804 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:00.804 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:00.804 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:00.804 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:00.804 1+0 records in 00:16:00.804 1+0 records out 00:16:00.804 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000674619 s, 6.1 MB/s 00:16:00.804 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.804 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:00.804 18:25:19 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.804 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:00.804 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:00.804 18:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:00.804 18:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:00.804 18:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:16:01.062 /dev/nbd10 00:16:01.062 18:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:16:01.062 18:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:16:01.062 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:16:01.062 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:01.062 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:01.062 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:01.062 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:16:01.062 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:01.062 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:01.062 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:01.062 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:01.062 1+0 records in 00:16:01.062 1+0 records out 00:16:01.062 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367329 s, 11.2 MB/s 00:16:01.062 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.062 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:01.062 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.062 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:01.062 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:01.062 18:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:01.062 18:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:01.062 18:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:16:01.323 /dev/nbd11 00:16:01.323 18:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:16:01.323 18:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:16:01.323 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:16:01.323 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:01.323 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:01.323 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:01.323 18:25:19 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:16:01.323 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:01.323 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:01.323 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:01.323 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:01.323 1+0 records in 00:16:01.323 1+0 records out 00:16:01.323 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000997417 s, 4.1 MB/s 00:16:01.323 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.323 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:01.323 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.323 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:01.323 18:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:01.323 18:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:01.323 18:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:01.323 18:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:16:01.585 /dev/nbd12 00:16:01.585 18:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:16:01.585 18:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:16:01.585 18:25:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:16:01.585 18:25:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:01.585 18:25:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:01.585 18:25:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:01.585 18:25:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:16:01.585 18:25:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:01.585 18:25:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:01.585 18:25:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:01.585 18:25:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:01.585 1+0 records in 00:16:01.585 1+0 records out 00:16:01.585 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00141692 s, 2.9 MB/s 00:16:01.585 18:25:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.585 18:25:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:01.585 18:25:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.585 18:25:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:01.585 18:25:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:01.585 18:25:20 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:01.585 18:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:01.585 18:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:16:01.845 /dev/nbd13 00:16:01.845 18:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:16:01.845 18:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:16:01.845 18:25:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:16:01.845 18:25:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:01.845 18:25:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:01.845 18:25:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:01.845 18:25:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:16:01.845 18:25:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:01.845 18:25:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:01.845 18:25:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:01.845 18:25:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:01.845 1+0 records in 00:16:01.845 1+0 records out 00:16:01.845 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000913945 s, 4.5 MB/s 00:16:01.845 18:25:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.845 18:25:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:01.845 18:25:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.845 18:25:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:01.845 18:25:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:01.845 18:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:01.846 18:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:01.846 18:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:01.846 18:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:01.846 18:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:02.107 18:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:02.107 { 00:16:02.107 "nbd_device": "/dev/nbd0", 00:16:02.107 "bdev_name": "nvme0n1" 00:16:02.107 }, 00:16:02.107 { 00:16:02.107 "nbd_device": "/dev/nbd1", 00:16:02.107 "bdev_name": "nvme0n2" 00:16:02.107 }, 00:16:02.107 { 00:16:02.107 "nbd_device": "/dev/nbd10", 00:16:02.107 "bdev_name": "nvme0n3" 00:16:02.107 }, 00:16:02.107 { 00:16:02.107 "nbd_device": "/dev/nbd11", 00:16:02.107 "bdev_name": "nvme1n1" 00:16:02.107 }, 00:16:02.107 { 00:16:02.107 "nbd_device": "/dev/nbd12", 00:16:02.107 "bdev_name": "nvme2n1" 00:16:02.107 }, 00:16:02.107 { 00:16:02.107 "nbd_device": "/dev/nbd13", 00:16:02.107 "bdev_name": "nvme3n1" 00:16:02.107 } 00:16:02.107 ]' 00:16:02.107 18:25:20 
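
This second pass (nbd_rpc_data_verify) differs from the first in one detail visible in the mapping just captured: each bdev is exported at a caller-chosen index, so nvme0n3 lands on /dev/nbd10 rather than the next free slot. A sketch of that explicit start loop, with both lists taken from the trace:

    # Explicit-index variant: nbd_start_disk takes the target /dev/nbdN.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    bdev_list=(nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1)
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    for i in "${!bdev_list[@]}"; do
        "$rpc" -s "$sock" nbd_start_disk "${bdev_list[$i]}" "${nbd_list[$i]}"
    done
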
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:02.107 { 00:16:02.107 "nbd_device": "/dev/nbd0", 00:16:02.107 "bdev_name": "nvme0n1" 00:16:02.107 }, 00:16:02.107 { 00:16:02.107 "nbd_device": "/dev/nbd1", 00:16:02.107 "bdev_name": "nvme0n2" 00:16:02.107 }, 00:16:02.107 { 00:16:02.107 "nbd_device": "/dev/nbd10", 00:16:02.107 "bdev_name": "nvme0n3" 00:16:02.107 }, 00:16:02.107 { 00:16:02.107 "nbd_device": "/dev/nbd11", 00:16:02.107 "bdev_name": "nvme1n1" 00:16:02.107 }, 00:16:02.107 { 00:16:02.107 "nbd_device": "/dev/nbd12", 00:16:02.107 "bdev_name": "nvme2n1" 00:16:02.107 }, 00:16:02.107 { 00:16:02.107 "nbd_device": "/dev/nbd13", 00:16:02.107 "bdev_name": "nvme3n1" 00:16:02.107 } 00:16:02.107 ]' 00:16:02.107 18:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:02.107 18:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:02.107 /dev/nbd1 00:16:02.107 /dev/nbd10 00:16:02.107 /dev/nbd11 00:16:02.107 /dev/nbd12 00:16:02.107 /dev/nbd13' 00:16:02.107 18:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:02.107 /dev/nbd1 00:16:02.107 /dev/nbd10 00:16:02.107 /dev/nbd11 00:16:02.107 /dev/nbd12 00:16:02.107 /dev/nbd13' 00:16:02.108 18:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:02.108 18:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:16:02.108 18:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:16:02.108 18:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:16:02.108 18:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:16:02.108 18:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:16:02.108 18:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:02.108 18:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:02.108 18:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:02.108 18:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:02.108 18:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:02.108 18:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:16:02.108 256+0 records in 00:16:02.108 256+0 records out 00:16:02.108 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00546354 s, 192 MB/s 00:16:02.108 18:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:02.108 18:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:02.369 256+0 records in 00:16:02.369 256+0 records out 00:16:02.369 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.240045 s, 4.4 MB/s 00:16:02.369 18:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:02.369 18:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:02.630 256+0 records in 00:16:02.630 256+0 records out 00:16:02.630 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.240779 s, 4.4 MB/s 00:16:02.630 18:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:02.630 18:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:16:02.891 256+0 records in 00:16:02.891 256+0 records out 00:16:02.891 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.231299 s, 4.5 MB/s 00:16:02.891 18:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:02.891 18:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:16:03.153 256+0 records in 00:16:03.153 256+0 records out 00:16:03.153 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.244921 s, 4.3 MB/s 00:16:03.153 18:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:03.153 18:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:16:03.415 256+0 records in 00:16:03.415 256+0 records out 00:16:03.415 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.317591 s, 3.3 MB/s 00:16:03.415 18:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:03.415 18:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:16:03.677 256+0 records in 00:16:03.677 256+0 records out 00:16:03.677 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.246992 s, 4.2 MB/s 00:16:03.677 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:16:03.677 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:03.677 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:03.677 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:03.677 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:03.677 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:03.677 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:03.677 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:03.677 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:16:03.677 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:03.677 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:16:03.677 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:03.677 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:16:03.677 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:03.677 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:16:03.677 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:03.677 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:16:03.677 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:03.677 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:16:03.677 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:03.677 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:03.677 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:03.677 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:03.677 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:03.677 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:03.677 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:03.677 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:03.938 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:03.938 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:03.938 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:03.938 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:03.938 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:03.938 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:03.938 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:03.938 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:03.938 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:03.938 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:04.229 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:04.229 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:04.229 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:04.229 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:04.229 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:04.229 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:04.229 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:04.229 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:04.229 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:04.229 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:16:04.530 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:16:04.530 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:16:04.530 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:16:04.530 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:04.530 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:04.530 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:16:04.530 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:04.530 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:04.530 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:04.530 18:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:16:04.530 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:16:04.530 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:16:04.530 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:16:04.530 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:04.530 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:04.530 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:16:04.530 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:04.530 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:04.530 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:04.530 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:16:04.796 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:16:04.796 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:16:04.796 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:16:04.796 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:04.796 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:04.797 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:16:04.797 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:04.797 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:04.797 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:04.797 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:16:05.058 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:16:05.058 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:16:05.058 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:16:05.058 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:05.058 18:25:23 
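
The dd and cmp lines above make up the data-path check (nbd_dd_data_verify): one random 1 MiB pattern is generated, written through every export with O_DIRECT, then compared back byte for byte against each device. A condensed sketch with the device list from the trace; the scratch path is a stand-in:

    # Write one shared random pattern through each export, then verify it.
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    pattern=/tmp/nbdrandtest               # stand-in for test/bdev/nbdrandtest
    dd if=/dev/urandom of="$pattern" bs=4096 count=256     # 1 MiB of data
    for dev in "${nbd_list[@]}"; do
        dd if="$pattern" of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$pattern" "$dev"     # any mismatch fails the test
    done
    rm "$pattern"
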
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:05.058 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:16:05.058 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:05.058 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:05.058 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:05.058 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:05.058 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:05.317 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:05.317 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:05.318 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:05.318 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:05.318 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:05.318 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:05.318 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:05.318 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:05.318 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:05.318 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:16:05.318 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:05.318 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:16:05.318 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:05.318 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:05.318 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:16:05.318 18:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:16:05.576 malloc_lvol_verify 00:16:05.576 18:25:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:16:05.834 f09999c7-e4ed-48f0-927c-186acfddd09a 00:16:05.834 18:25:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:16:05.834 bb3b6a71-7919-4d6d-8618-af98ec7dc1a0 00:16:05.834 18:25:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:16:06.091 /dev/nbd0 00:16:06.091 18:25:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:16:06.092 18:25:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:16:06.092 18:25:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:16:06.092 18:25:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:16:06.092 18:25:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
00:16:06.092 mke2fs 1.47.0 (5-Feb-2023) 00:16:06.092 Discarding device blocks: 0/4096 done 00:16:06.092 Creating filesystem with 4096 1k blocks and 1024 inodes 00:16:06.092 00:16:06.092 Allocating group tables: 0/1 done 00:16:06.092 Writing inode tables: 0/1 done 00:16:06.092 Creating journal (1024 blocks): done 00:16:06.092 Writing superblocks and filesystem accounting information: 0/1 done 00:16:06.092 00:16:06.092 18:25:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:06.092 18:25:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:06.092 18:25:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:06.092 18:25:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:06.092 18:25:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:06.092 18:25:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:06.092 18:25:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:06.350 18:25:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:06.350 18:25:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:06.350 18:25:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:06.350 18:25:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:06.350 18:25:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:06.350 18:25:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:06.350 18:25:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:06.350 18:25:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:06.350 18:25:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72296 00:16:06.350 18:25:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72296 ']' 00:16:06.350 18:25:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72296 00:16:06.350 18:25:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:16:06.350 18:25:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:06.350 18:25:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72296 00:16:06.350 killing process with pid 72296 00:16:06.350 18:25:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:06.350 18:25:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:06.350 18:25:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72296' 00:16:06.350 18:25:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72296 00:16:06.350 18:25:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72296 00:16:07.283 ************************************ 00:16:07.283 END TEST bdev_nbd 00:16:07.283 ************************************ 00:16:07.283 18:25:25 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:16:07.283 00:16:07.283 real 0m10.746s 00:16:07.283 user 0m14.507s 00:16:07.283 sys 0m3.650s 00:16:07.283 18:25:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:07.283 
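
The lvol round traced above (nbd_with_lvol_verify) stacks a 16 MiB malloc bdev with 512-byte blocks, a logical-volume store, and a 4 MiB volume, exports the volume at /dev/nbd0, requires a non-zero /sys/block/nbd0/size before formatting, and runs mkfs.ext4 as the functional check. A sketch of that chain, with the commands as they appear in the trace:

    # malloc bdev -> lvstore 'lvs' -> lvol 'lvol' -> NBD export -> mkfs.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    "$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512
    "$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
    "$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs
    "$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
    # wait_for_nbd_set_capacity: the size node must exist and be non-zero.
    [[ -e /sys/block/nbd0/size ]] && (( $(cat /sys/block/nbd0/size) > 0 ))
    mkfs.ext4 /dev/nbd0
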
18:25:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:07.283 18:25:25 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:16:07.283 18:25:25 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:16:07.283 18:25:25 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:16:07.283 18:25:25 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:16:07.283 18:25:25 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:07.283 18:25:25 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:07.284 18:25:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:07.284 ************************************ 00:16:07.284 START TEST bdev_fio 00:16:07.284 ************************************ 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:16:07.284 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo 
serialize_overlap=1 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:07.284 ************************************ 00:16:07.284 START TEST bdev_fio_rw_verify 00:16:07.284 ************************************ 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:07.284 18:25:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:07.543 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:07.543 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:07.543 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:07.543 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:07.543 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:07.543 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:07.543 fio-3.35 00:16:07.543 Starting 6 threads 00:16:19.764 00:16:19.764 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=72700: Wed Nov 20 18:25:36 2024 00:16:19.764 read: IOPS=15.8k, BW=61.7MiB/s (64.7MB/s)(617MiB/10002msec) 00:16:19.764 slat (usec): min=2, max=3924, avg= 6.57, stdev=19.02 00:16:19.764 clat (usec): min=84, max=168983, avg=1216.66, 
stdev=1409.79 00:16:19.764 lat (usec): min=89, max=168987, avg=1223.23, stdev=1410.21 00:16:19.764 clat percentiles (usec): 00:16:19.764 | 50.000th=[ 1106], 99.000th=[ 3589], 99.900th=[ 4948], 00:16:19.764 | 99.990th=[ 7373], 99.999th=[168821] 00:16:19.764 write: IOPS=16.2k, BW=63.1MiB/s (66.2MB/s)(631MiB/10002msec); 0 zone resets 00:16:19.764 slat (usec): min=12, max=4587, avg=40.43, stdev=135.10 00:16:19.764 clat (usec): min=86, max=7680, avg=1458.75, stdev=805.84 00:16:19.764 lat (usec): min=100, max=7697, avg=1499.18, stdev=818.46 00:16:19.764 clat percentiles (usec): 00:16:19.764 | 50.000th=[ 1336], 99.000th=[ 3949], 99.900th=[ 5407], 99.990th=[ 6456], 00:16:19.764 | 99.999th=[ 7635] 00:16:19.764 bw ( KiB/s): min=47501, max=104830, per=100.00%, avg=65141.42, stdev=2565.62, samples=114 00:16:19.764 iops : min=11871, max=26205, avg=16283.95, stdev=641.44, samples=114 00:16:19.764 lat (usec) : 100=0.01%, 250=3.47%, 500=9.59%, 750=11.63%, 1000=13.17% 00:16:19.764 lat (msec) : 2=44.64%, 4=16.78%, 10=0.71%, 250=0.01% 00:16:19.764 cpu : usr=41.47%, sys=33.69%, ctx=5755, majf=0, minf=15658 00:16:19.764 IO depths : 1=11.2%, 2=23.6%, 4=51.3%, 8=13.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:19.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.764 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.764 issued rwts: total=158013,161604,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:19.764 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:19.764 00:16:19.764 Run status group 0 (all jobs): 00:16:19.764 READ: bw=61.7MiB/s (64.7MB/s), 61.7MiB/s-61.7MiB/s (64.7MB/s-64.7MB/s), io=617MiB (647MB), run=10002-10002msec 00:16:19.764 WRITE: bw=63.1MiB/s (66.2MB/s), 63.1MiB/s-63.1MiB/s (66.2MB/s-66.2MB/s), io=631MiB (662MB), run=10002-10002msec 00:16:19.764 ----------------------------------------------------- 00:16:19.764 Suppressions used: 00:16:19.764 count bytes template 00:16:19.764 6 48 /usr/src/fio/parse.c 00:16:19.764 3489 334944 /usr/src/fio/iolog.c 00:16:19.764 1 8 libtcmalloc_minimal.so 00:16:19.764 1 904 libcrypto.so 00:16:19.764 ----------------------------------------------------- 00:16:19.764 00:16:19.764 00:16:19.764 real 0m11.928s 00:16:19.764 user 0m26.421s 00:16:19.764 sys 0m20.501s 00:16:19.764 18:25:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:19.764 ************************************ 00:16:19.764 END TEST bdev_fio_rw_verify 00:16:19.764 ************************************ 00:16:19.764 18:25:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:16:19.764 18:25:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:16:19.764 18:25:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:19.764 18:25:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:16:19.764 18:25:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:19.764 18:25:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:16:19.764 18:25:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:16:19.764 18:25:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:19.764 18:25:37 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:16:19.764 18:25:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:19.764 18:25:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:16:19.764 18:25:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:19.764 18:25:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:19.764 18:25:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:19.764 18:25:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:16:19.764 18:25:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:16:19.764 18:25:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:16:19.764 18:25:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:16:19.764 18:25:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "95630b87-c255-4228-849a-8ed90c28f973"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "95630b87-c255-4228-849a-8ed90c28f973",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "5927abee-5957-4bfc-ba0c-b2cffe5e16ae"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5927abee-5957-4bfc-ba0c-b2cffe5e16ae",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "1b21a195-f412-40d6-af80-f9d608ecfa98"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1b21a195-f412-40d6-af80-f9d608ecfa98",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' 
"nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "2e86029f-ebe0-4811-b91b-1d2195ff81cd"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "2e86029f-ebe0-4811-b91b-1d2195ff81cd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "857f1bb3-5409-421c-af89-385a0044e2e1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "857f1bb3-5409-421c-af89-385a0044e2e1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "5586cd95-5775-4567-91d9-c3addfa1edef"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "5586cd95-5775-4567-91d9-c3addfa1edef",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:19.765 18:25:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:16:19.765 18:25:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:19.765 /home/vagrant/spdk_repo/spdk 00:16:19.765 18:25:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:16:19.765 18:25:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:16:19.765 18:25:37 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@363 -- # return 0 00:16:19.765 00:16:19.765 real 0m12.104s 00:16:19.765 user 0m26.503s 00:16:19.765 sys 0m20.573s 00:16:19.765 18:25:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:19.765 ************************************ 00:16:19.765 END TEST bdev_fio 00:16:19.765 ************************************ 00:16:19.765 18:25:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:19.765 18:25:37 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:19.765 18:25:37 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:19.765 18:25:37 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:19.765 18:25:37 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:19.765 18:25:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:19.765 ************************************ 00:16:19.765 START TEST bdev_verify 00:16:19.765 ************************************ 00:16:19.765 18:25:37 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:19.765 [2024-11-20 18:25:37.904333] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:16:19.765 [2024-11-20 18:25:37.904491] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72873 ] 00:16:19.765 [2024-11-20 18:25:38.067790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:19.765 [2024-11-20 18:25:38.189799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:19.765 [2024-11-20 18:25:38.189921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.026 Running I/O for 5 seconds... 
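For reference, the bdev_verify test above is a single bdevperf invocation; the flags visible in the trace decode as follows (a sketch with the values exactly as logged):

  # -q 128: 128 outstanding I/Os per job; -o 4096: 4 KiB I/O size
  # -w verify: write-then-read-back workload; -t 5: run for 5 seconds
  # -m 0x3: reactors on cores 0 and 1 (matching the two "Reactor started" lines)
  # -C: every core drives each bdev, hence two Core Mask rows per device below
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3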
00:16:22.358 22816.00 IOPS, 89.12 MiB/s [2024-11-20T18:25:41.930Z] 22896.00 IOPS, 89.44 MiB/s [2024-11-20T18:25:42.874Z] 23253.33 IOPS, 90.83 MiB/s [2024-11-20T18:25:43.815Z] 23512.00 IOPS, 91.84 MiB/s [2024-11-20T18:25:43.815Z] 23724.80 IOPS, 92.67 MiB/s 00:16:25.186 Latency(us) 00:16:25.186 [2024-11-20T18:25:43.815Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:25.186 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:25.186 Verification LBA range: start 0x0 length 0x80000 00:16:25.186 nvme0n1 : 5.05 1723.37 6.73 0.00 0.00 74121.65 7007.31 79449.80 00:16:25.186 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:25.186 Verification LBA range: start 0x80000 length 0x80000 00:16:25.186 nvme0n1 : 5.04 2157.34 8.43 0.00 0.00 59228.19 6452.78 64124.46 00:16:25.186 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:25.186 Verification LBA range: start 0x0 length 0x80000 00:16:25.186 nvme0n2 : 5.05 1697.09 6.63 0.00 0.00 75090.29 8771.74 76626.71 00:16:25.186 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:25.186 Verification LBA range: start 0x80000 length 0x80000 00:16:25.186 nvme0n2 : 5.06 2097.70 8.19 0.00 0.00 60812.94 4965.61 61301.37 00:16:25.186 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:25.186 Verification LBA range: start 0x0 length 0x80000 00:16:25.186 nvme0n3 : 5.03 1703.81 6.66 0.00 0.00 74624.52 9779.99 80256.39 00:16:25.186 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:25.186 Verification LBA range: start 0x80000 length 0x80000 00:16:25.186 nvme0n3 : 5.05 2104.99 8.22 0.00 0.00 60501.88 5772.21 62107.96 00:16:25.186 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:25.186 Verification LBA range: start 0x0 length 0x20000 00:16:25.186 nvme1n1 : 5.07 1715.43 6.70 0.00 0.00 73963.93 8922.98 70173.93 00:16:25.186 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:25.186 Verification LBA range: start 0x20000 length 0x20000 00:16:25.186 nvme1n1 : 5.06 2098.82 8.20 0.00 0.00 60579.16 9679.16 57268.38 00:16:25.186 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:25.186 Verification LBA range: start 0x0 length 0xbd0bd 00:16:25.186 nvme2n1 : 5.08 2438.43 9.53 0.00 0.00 51803.25 6099.89 60494.77 00:16:25.186 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:25.186 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:16:25.187 nvme2n1 : 5.05 2684.99 10.49 0.00 0.00 47259.91 5595.77 62511.26 00:16:25.187 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:25.187 Verification LBA range: start 0x0 length 0xa0000 00:16:25.187 nvme3n1 : 5.08 1536.92 6.00 0.00 0.00 82031.70 4461.49 98808.12 00:16:25.187 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:25.187 Verification LBA range: start 0xa0000 length 0xa0000 00:16:25.187 nvme3n1 : 5.07 1515.52 5.92 0.00 0.00 83465.57 5671.38 108083.99 00:16:25.187 [2024-11-20T18:25:43.816Z] =================================================================================================================== 00:16:25.187 [2024-11-20T18:25:43.816Z] Total : 23474.40 91.70 0.00 0.00 64958.98 4461.49 108083.99 00:16:26.130 00:16:26.130 real 0m6.730s 00:16:26.130 user 0m10.809s 00:16:26.130 sys 0m1.500s 00:16:26.130 18:25:44 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:16:26.130 ************************************ 00:16:26.130 END TEST bdev_verify 00:16:26.130 ************************************ 00:16:26.130 18:25:44 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:26.130 18:25:44 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:26.130 18:25:44 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:26.130 18:25:44 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:26.130 18:25:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:26.130 ************************************ 00:16:26.130 START TEST bdev_verify_big_io 00:16:26.130 ************************************ 00:16:26.130 18:25:44 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:26.130 [2024-11-20 18:25:44.710562] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:16:26.130 [2024-11-20 18:25:44.710707] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72975 ] 00:16:26.392 [2024-11-20 18:25:44.876049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:26.392 [2024-11-20 18:25:44.998533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:26.392 [2024-11-20 18:25:44.998648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.966 Running I/O for 5 seconds... 
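The big_io variant differs from the previous verify run only in the I/O size (-o 65536 instead of 4096), so throughput per IOPS scales accordingly: the MiB/s column in these tables is simply IOPS x io_size / 2^20. A quick check against the 4 KiB run above:

  # 22816 IOPS at 4096-byte I/Os -> 89.12 MiB/s, matching the logged progress line
  awk -v iops=22816 -v bs=4096 'BEGIN { printf "%.2f MiB/s\n", iops * bs / 1048576 }'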
00:16:33.204 2758.00 IOPS, 172.38 MiB/s [2024-11-20T18:25:52.095Z] 3468.50 IOPS, 216.78 MiB/s [2024-11-20T18:25:52.095Z] 3037.67 IOPS, 189.85 MiB/s 00:16:33.466 Latency(us) 00:16:33.466 [2024-11-20T18:25:52.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:33.466 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:33.466 Verification LBA range: start 0x0 length 0x8000 00:16:33.466 nvme0n1 : 6.02 47.88 2.99 0.00 0.00 2543502.57 111310.38 2877937.82 00:16:33.466 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:33.466 Verification LBA range: start 0x8000 length 0x8000 00:16:33.466 nvme0n1 : 5.53 124.49 7.78 0.00 0.00 987204.58 42951.29 896935.78 00:16:33.466 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:33.466 Verification LBA range: start 0x0 length 0x8000 00:16:33.466 nvme0n2 : 6.02 85.08 5.32 0.00 0.00 1351124.87 23996.26 1406705.03 00:16:33.466 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:33.466 Verification LBA range: start 0x8000 length 0x8000 00:16:33.466 nvme0n2 : 5.86 139.43 8.71 0.00 0.00 849729.87 5142.06 942105.21 00:16:33.466 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:33.466 Verification LBA range: start 0x0 length 0x8000 00:16:33.466 nvme0n3 : 5.98 86.90 5.43 0.00 0.00 1245376.34 95178.44 1174405.12 00:16:33.466 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:33.466 Verification LBA range: start 0x8000 length 0x8000 00:16:33.466 nvme0n3 : 5.87 141.70 8.86 0.00 0.00 838261.18 50009.01 858219.13 00:16:33.466 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:33.466 Verification LBA range: start 0x0 length 0x2000 00:16:33.466 nvme1n1 : 6.11 100.90 6.31 0.00 0.00 1037049.09 43152.94 1187310.67 00:16:33.466 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:33.466 Verification LBA range: start 0x2000 length 0x2000 00:16:33.466 nvme1n1 : 5.88 141.48 8.84 0.00 0.00 822632.28 14417.92 1580929.97 00:16:33.466 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:33.466 Verification LBA range: start 0x0 length 0xbd0b 00:16:33.466 nvme2n1 : 6.31 169.84 10.61 0.00 0.00 589729.64 6200.71 1606741.07 00:16:33.466 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:33.466 Verification LBA range: start 0xbd0b length 0xbd0b 00:16:33.466 nvme2n1 : 5.87 136.18 8.51 0.00 0.00 822390.34 88322.36 1651910.50 00:16:33.466 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:33.466 Verification LBA range: start 0x0 length 0xa000 00:16:33.466 nvme3n1 : 6.51 216.21 13.51 0.00 0.00 442597.57 756.18 1703532.70 00:16:33.466 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:33.466 Verification LBA range: start 0xa000 length 0xa000 00:16:33.466 nvme3n1 : 5.88 153.65 9.60 0.00 0.00 715100.15 6604.01 1129235.69 00:16:33.466 [2024-11-20T18:25:52.095Z] =================================================================================================================== 00:16:33.466 [2024-11-20T18:25:52.095Z] Total : 1543.73 96.48 0.00 0.00 864004.99 756.18 2877937.82 00:16:34.410 00:16:34.410 real 0m8.372s 00:16:34.410 user 0m15.389s 00:16:34.410 sys 0m0.491s 00:16:34.410 18:25:53 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:34.410 18:25:53 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:34.410 ************************************ 00:16:34.410 END TEST bdev_verify_big_io 00:16:34.410 ************************************ 00:16:34.671 18:25:53 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:34.671 18:25:53 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:34.671 18:25:53 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:34.671 18:25:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:34.671 ************************************ 00:16:34.671 START TEST bdev_write_zeroes 00:16:34.671 ************************************ 00:16:34.671 18:25:53 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:34.671 [2024-11-20 18:25:53.133650] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:16:34.671 [2024-11-20 18:25:53.133746] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73085 ] 00:16:34.671 [2024-11-20 18:25:53.286107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.932 [2024-11-20 18:25:53.376590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.193 Running I/O for 1 seconds... 
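Each TEST block in this log (the asterisk banners plus the real/user/sys timing) is produced by the run_test helper in common/autotest_common.sh. A minimal sketch of the visible behavior only; the real helper also manages xtrace state and argument validation, which is elided here:

  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"        # the traced command, e.g. bdevperf ... -w write_zeroes
      local rc=$?      # bash's time keyword preserves the command's exit status
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }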
00:16:36.138 81920.00 IOPS, 320.00 MiB/s 00:16:36.138 Latency(us) 00:16:36.138 [2024-11-20T18:25:54.767Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:36.138 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:36.138 nvme0n1 : 1.01 13262.09 51.81 0.00 0.00 9643.17 6351.95 18854.20 00:16:36.138 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:36.138 nvme0n2 : 1.01 13250.03 51.76 0.00 0.00 9645.72 6377.16 18652.55 00:16:36.138 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:36.138 nvme0n3 : 1.02 13238.88 51.71 0.00 0.00 9647.98 6377.16 18450.90 00:16:36.138 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:36.138 nvme1n1 : 1.02 13227.44 51.67 0.00 0.00 9650.47 6402.36 18249.26 00:16:36.138 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:36.138 nvme2n1 : 1.02 15285.33 59.71 0.00 0.00 8345.30 3188.58 17946.78 00:16:36.138 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:36.138 nvme3n1 : 1.02 13143.91 51.34 0.00 0.00 9642.17 2835.69 23592.96 00:16:36.138 [2024-11-20T18:25:54.767Z] =================================================================================================================== 00:16:36.138 [2024-11-20T18:25:54.767Z] Total : 81407.67 318.00 0.00 0.00 9400.66 2835.69 23592.96 00:16:37.083 00:16:37.083 real 0m2.274s 00:16:37.083 user 0m1.621s 00:16:37.083 sys 0m0.486s 00:16:37.083 18:25:55 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:37.083 ************************************ 00:16:37.083 END TEST bdev_write_zeroes 00:16:37.083 18:25:55 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:37.083 ************************************ 00:16:37.083 18:25:55 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:37.083 18:25:55 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:37.083 18:25:55 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:37.083 18:25:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:37.083 ************************************ 00:16:37.083 START TEST bdev_json_nonenclosed 00:16:37.083 ************************************ 00:16:37.083 18:25:55 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:37.083 [2024-11-20 18:25:55.471969] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:16:37.083 [2024-11-20 18:25:55.472117] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73130 ] 00:16:37.083 [2024-11-20 18:25:55.634434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.345 [2024-11-20 18:25:55.745631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.345 [2024-11-20 18:25:55.745698] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:37.345 [2024-11-20 18:25:55.745714] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:37.345 [2024-11-20 18:25:55.745722] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:37.345 00:16:37.345 real 0m0.493s 00:16:37.345 user 0m0.270s 00:16:37.345 sys 0m0.118s 00:16:37.345 18:25:55 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:37.345 ************************************ 00:16:37.345 END TEST bdev_json_nonenclosed 00:16:37.345 ************************************ 00:16:37.345 18:25:55 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:37.345 18:25:55 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:37.345 18:25:55 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:37.345 18:25:55 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:37.345 18:25:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:37.345 ************************************ 00:16:37.345 START TEST bdev_json_nonarray 00:16:37.345 ************************************ 00:16:37.345 18:25:55 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:37.606 [2024-11-20 18:25:56.010921] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:16:37.606 [2024-11-20 18:25:56.011018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73161 ] 00:16:37.606 [2024-11-20 18:25:56.158748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.867 [2024-11-20 18:25:56.248244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.867 [2024-11-20 18:25:56.248330] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:16:37.867 [2024-11-20 18:25:56.248346] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:37.867 [2024-11-20 18:25:56.248354] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:37.867 00:16:37.867 real 0m0.435s 00:16:37.867 user 0m0.250s 00:16:37.867 sys 0m0.082s 00:16:37.867 18:25:56 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:37.867 18:25:56 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:37.867 ************************************ 00:16:37.867 END TEST bdev_json_nonarray 00:16:37.867 ************************************ 00:16:37.868 18:25:56 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:16:37.868 18:25:56 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:16:37.868 18:25:56 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:16:37.868 18:25:56 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:16:37.868 18:25:56 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:16:37.868 18:25:56 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:37.868 18:25:56 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:37.868 18:25:56 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:16:37.868 18:25:56 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:16:37.868 18:25:56 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:16:37.868 18:25:56 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:16:37.868 18:25:56 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:38.440 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:45.031 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:45.031 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:45.975 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:45.976 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:45.976 00:16:45.976 real 0m57.731s 00:16:45.976 user 1m21.226s 00:16:45.976 sys 0m46.141s 00:16:45.976 18:26:04 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:45.976 18:26:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:45.976 ************************************ 00:16:45.976 END TEST blockdev_xnvme 00:16:45.976 ************************************ 00:16:45.976 18:26:04 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:45.976 18:26:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:45.976 18:26:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:45.976 18:26:04 -- common/autotest_common.sh@10 -- # set +x 00:16:45.976 ************************************ 00:16:45.976 START TEST ublk 00:16:45.976 ************************************ 00:16:45.976 18:26:04 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:45.976 * Looking for test storage... 
00:16:45.976 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:45.976 18:26:04 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:45.976 18:26:04 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:16:45.976 18:26:04 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:46.237 18:26:04 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:46.237 18:26:04 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:46.237 18:26:04 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:46.237 18:26:04 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:46.237 18:26:04 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:16:46.237 18:26:04 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:16:46.237 18:26:04 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:16:46.237 18:26:04 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:16:46.237 18:26:04 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:16:46.237 18:26:04 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:16:46.237 18:26:04 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:16:46.237 18:26:04 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:46.237 18:26:04 ublk -- scripts/common.sh@344 -- # case "$op" in 00:16:46.237 18:26:04 ublk -- scripts/common.sh@345 -- # : 1 00:16:46.237 18:26:04 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:46.237 18:26:04 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:46.237 18:26:04 ublk -- scripts/common.sh@365 -- # decimal 1 00:16:46.237 18:26:04 ublk -- scripts/common.sh@353 -- # local d=1 00:16:46.237 18:26:04 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:46.237 18:26:04 ublk -- scripts/common.sh@355 -- # echo 1 00:16:46.237 18:26:04 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:16:46.237 18:26:04 ublk -- scripts/common.sh@366 -- # decimal 2 00:16:46.237 18:26:04 ublk -- scripts/common.sh@353 -- # local d=2 00:16:46.237 18:26:04 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:46.237 18:26:04 ublk -- scripts/common.sh@355 -- # echo 2 00:16:46.237 18:26:04 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:16:46.237 18:26:04 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:46.237 18:26:04 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:46.238 18:26:04 ublk -- scripts/common.sh@368 -- # return 0 00:16:46.238 18:26:04 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:46.238 18:26:04 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:46.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.238 --rc genhtml_branch_coverage=1 00:16:46.238 --rc genhtml_function_coverage=1 00:16:46.238 --rc genhtml_legend=1 00:16:46.238 --rc geninfo_all_blocks=1 00:16:46.238 --rc geninfo_unexecuted_blocks=1 00:16:46.238 00:16:46.238 ' 00:16:46.238 18:26:04 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:46.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.238 --rc genhtml_branch_coverage=1 00:16:46.238 --rc genhtml_function_coverage=1 00:16:46.238 --rc genhtml_legend=1 00:16:46.238 --rc geninfo_all_blocks=1 00:16:46.238 --rc geninfo_unexecuted_blocks=1 00:16:46.238 00:16:46.238 ' 00:16:46.238 18:26:04 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:46.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.238 --rc genhtml_branch_coverage=1 00:16:46.238 --rc 
genhtml_function_coverage=1 00:16:46.238 --rc genhtml_legend=1 00:16:46.238 --rc geninfo_all_blocks=1 00:16:46.238 --rc geninfo_unexecuted_blocks=1 00:16:46.238 00:16:46.238 ' 00:16:46.238 18:26:04 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:46.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.238 --rc genhtml_branch_coverage=1 00:16:46.238 --rc genhtml_function_coverage=1 00:16:46.238 --rc genhtml_legend=1 00:16:46.238 --rc geninfo_all_blocks=1 00:16:46.238 --rc geninfo_unexecuted_blocks=1 00:16:46.238 00:16:46.238 ' 00:16:46.238 18:26:04 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:46.238 18:26:04 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:46.238 18:26:04 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:46.238 18:26:04 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:46.238 18:26:04 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:46.238 18:26:04 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:46.238 18:26:04 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:46.238 18:26:04 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:46.238 18:26:04 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:46.238 18:26:04 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:16:46.238 18:26:04 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:16:46.238 18:26:04 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:16:46.238 18:26:04 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:16:46.238 18:26:04 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:16:46.238 18:26:04 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:16:46.238 18:26:04 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:16:46.238 18:26:04 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:16:46.238 18:26:04 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:16:46.238 18:26:04 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:16:46.238 18:26:04 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:16:46.238 18:26:04 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:46.238 18:26:04 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:46.238 18:26:04 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:46.238 ************************************ 00:16:46.238 START TEST test_save_ublk_config 00:16:46.238 ************************************ 00:16:46.238 18:26:04 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:16:46.238 18:26:04 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:16:46.238 18:26:04 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=73456 00:16:46.238 18:26:04 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:16:46.238 18:26:04 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:16:46.238 18:26:04 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 73456 00:16:46.238 18:26:04 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73456 ']' 00:16:46.238 18:26:04 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.238 18:26:04 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:46.238 18:26:04 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:46.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.238 18:26:04 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:46.238 18:26:04 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:46.238 [2024-11-20 18:26:04.776367] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:16:46.238 [2024-11-20 18:26:04.777010] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73456 ] 00:16:46.499 [2024-11-20 18:26:04.940077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.499 [2024-11-20 18:26:05.060789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.457 18:26:05 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:47.457 18:26:05 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:47.457 18:26:05 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:16:47.457 18:26:05 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:16:47.457 18:26:05 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.457 18:26:05 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:47.457 [2024-11-20 18:26:05.861126] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:47.457 [2024-11-20 18:26:05.862177] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:47.457 malloc0 00:16:47.457 [2024-11-20 18:26:05.940285] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:47.457 [2024-11-20 18:26:05.940409] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:47.457 [2024-11-20 18:26:05.940422] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:47.457 [2024-11-20 18:26:05.940432] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:47.457 [2024-11-20 18:26:05.949277] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:47.457 [2024-11-20 18:26:05.949314] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:47.457 [2024-11-20 18:26:05.950777] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:47.457 [2024-11-20 18:26:05.950912] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:47.457 [2024-11-20 18:26:05.964205] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:47.457 0 00:16:47.457 18:26:05 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.457 18:26:05 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:16:47.457 18:26:05 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.457 18:26:05 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:47.719 18:26:06 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.719 18:26:06 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:16:47.719 "subsystems": [ 00:16:47.719 { 00:16:47.719 "subsystem": "fsdev", 00:16:47.719 
"config": [ 00:16:47.719 { 00:16:47.719 "method": "fsdev_set_opts", 00:16:47.719 "params": { 00:16:47.719 "fsdev_io_pool_size": 65535, 00:16:47.719 "fsdev_io_cache_size": 256 00:16:47.719 } 00:16:47.719 } 00:16:47.719 ] 00:16:47.719 }, 00:16:47.719 { 00:16:47.719 "subsystem": "keyring", 00:16:47.719 "config": [] 00:16:47.719 }, 00:16:47.719 { 00:16:47.719 "subsystem": "iobuf", 00:16:47.719 "config": [ 00:16:47.719 { 00:16:47.719 "method": "iobuf_set_options", 00:16:47.719 "params": { 00:16:47.719 "small_pool_count": 8192, 00:16:47.719 "large_pool_count": 1024, 00:16:47.719 "small_bufsize": 8192, 00:16:47.719 "large_bufsize": 135168, 00:16:47.719 "enable_numa": false 00:16:47.719 } 00:16:47.719 } 00:16:47.719 ] 00:16:47.719 }, 00:16:47.719 { 00:16:47.719 "subsystem": "sock", 00:16:47.719 "config": [ 00:16:47.719 { 00:16:47.719 "method": "sock_set_default_impl", 00:16:47.719 "params": { 00:16:47.719 "impl_name": "posix" 00:16:47.719 } 00:16:47.719 }, 00:16:47.719 { 00:16:47.719 "method": "sock_impl_set_options", 00:16:47.719 "params": { 00:16:47.719 "impl_name": "ssl", 00:16:47.719 "recv_buf_size": 4096, 00:16:47.719 "send_buf_size": 4096, 00:16:47.719 "enable_recv_pipe": true, 00:16:47.719 "enable_quickack": false, 00:16:47.719 "enable_placement_id": 0, 00:16:47.719 "enable_zerocopy_send_server": true, 00:16:47.719 "enable_zerocopy_send_client": false, 00:16:47.719 "zerocopy_threshold": 0, 00:16:47.719 "tls_version": 0, 00:16:47.719 "enable_ktls": false 00:16:47.719 } 00:16:47.719 }, 00:16:47.719 { 00:16:47.719 "method": "sock_impl_set_options", 00:16:47.719 "params": { 00:16:47.719 "impl_name": "posix", 00:16:47.719 "recv_buf_size": 2097152, 00:16:47.719 "send_buf_size": 2097152, 00:16:47.719 "enable_recv_pipe": true, 00:16:47.719 "enable_quickack": false, 00:16:47.719 "enable_placement_id": 0, 00:16:47.719 "enable_zerocopy_send_server": true, 00:16:47.719 "enable_zerocopy_send_client": false, 00:16:47.719 "zerocopy_threshold": 0, 00:16:47.719 "tls_version": 0, 00:16:47.719 "enable_ktls": false 00:16:47.719 } 00:16:47.719 } 00:16:47.719 ] 00:16:47.719 }, 00:16:47.719 { 00:16:47.719 "subsystem": "vmd", 00:16:47.719 "config": [] 00:16:47.719 }, 00:16:47.719 { 00:16:47.719 "subsystem": "accel", 00:16:47.719 "config": [ 00:16:47.719 { 00:16:47.719 "method": "accel_set_options", 00:16:47.719 "params": { 00:16:47.719 "small_cache_size": 128, 00:16:47.719 "large_cache_size": 16, 00:16:47.719 "task_count": 2048, 00:16:47.719 "sequence_count": 2048, 00:16:47.719 "buf_count": 2048 00:16:47.719 } 00:16:47.719 } 00:16:47.719 ] 00:16:47.719 }, 00:16:47.719 { 00:16:47.719 "subsystem": "bdev", 00:16:47.719 "config": [ 00:16:47.719 { 00:16:47.719 "method": "bdev_set_options", 00:16:47.719 "params": { 00:16:47.719 "bdev_io_pool_size": 65535, 00:16:47.719 "bdev_io_cache_size": 256, 00:16:47.719 "bdev_auto_examine": true, 00:16:47.719 "iobuf_small_cache_size": 128, 00:16:47.719 "iobuf_large_cache_size": 16 00:16:47.719 } 00:16:47.719 }, 00:16:47.719 { 00:16:47.719 "method": "bdev_raid_set_options", 00:16:47.719 "params": { 00:16:47.719 "process_window_size_kb": 1024, 00:16:47.719 "process_max_bandwidth_mb_sec": 0 00:16:47.719 } 00:16:47.719 }, 00:16:47.719 { 00:16:47.719 "method": "bdev_iscsi_set_options", 00:16:47.719 "params": { 00:16:47.719 "timeout_sec": 30 00:16:47.719 } 00:16:47.719 }, 00:16:47.719 { 00:16:47.719 "method": "bdev_nvme_set_options", 00:16:47.719 "params": { 00:16:47.719 "action_on_timeout": "none", 00:16:47.719 "timeout_us": 0, 00:16:47.719 "timeout_admin_us": 0, 00:16:47.719 
"keep_alive_timeout_ms": 10000, 00:16:47.719 "arbitration_burst": 0, 00:16:47.719 "low_priority_weight": 0, 00:16:47.719 "medium_priority_weight": 0, 00:16:47.719 "high_priority_weight": 0, 00:16:47.719 "nvme_adminq_poll_period_us": 10000, 00:16:47.719 "nvme_ioq_poll_period_us": 0, 00:16:47.719 "io_queue_requests": 0, 00:16:47.719 "delay_cmd_submit": true, 00:16:47.719 "transport_retry_count": 4, 00:16:47.719 "bdev_retry_count": 3, 00:16:47.719 "transport_ack_timeout": 0, 00:16:47.719 "ctrlr_loss_timeout_sec": 0, 00:16:47.719 "reconnect_delay_sec": 0, 00:16:47.719 "fast_io_fail_timeout_sec": 0, 00:16:47.719 "disable_auto_failback": false, 00:16:47.719 "generate_uuids": false, 00:16:47.719 "transport_tos": 0, 00:16:47.719 "nvme_error_stat": false, 00:16:47.719 "rdma_srq_size": 0, 00:16:47.719 "io_path_stat": false, 00:16:47.719 "allow_accel_sequence": false, 00:16:47.719 "rdma_max_cq_size": 0, 00:16:47.719 "rdma_cm_event_timeout_ms": 0, 00:16:47.719 "dhchap_digests": [ 00:16:47.719 "sha256", 00:16:47.719 "sha384", 00:16:47.719 "sha512" 00:16:47.719 ], 00:16:47.719 "dhchap_dhgroups": [ 00:16:47.719 "null", 00:16:47.719 "ffdhe2048", 00:16:47.719 "ffdhe3072", 00:16:47.719 "ffdhe4096", 00:16:47.719 "ffdhe6144", 00:16:47.720 "ffdhe8192" 00:16:47.720 ] 00:16:47.720 } 00:16:47.720 }, 00:16:47.720 { 00:16:47.720 "method": "bdev_nvme_set_hotplug", 00:16:47.720 "params": { 00:16:47.720 "period_us": 100000, 00:16:47.720 "enable": false 00:16:47.720 } 00:16:47.720 }, 00:16:47.720 { 00:16:47.720 "method": "bdev_malloc_create", 00:16:47.720 "params": { 00:16:47.720 "name": "malloc0", 00:16:47.720 "num_blocks": 8192, 00:16:47.720 "block_size": 4096, 00:16:47.720 "physical_block_size": 4096, 00:16:47.720 "uuid": "6148ea22-84e9-459a-9d54-e14a83fba276", 00:16:47.720 "optimal_io_boundary": 0, 00:16:47.720 "md_size": 0, 00:16:47.720 "dif_type": 0, 00:16:47.720 "dif_is_head_of_md": false, 00:16:47.720 "dif_pi_format": 0 00:16:47.720 } 00:16:47.720 }, 00:16:47.720 { 00:16:47.720 "method": "bdev_wait_for_examine" 00:16:47.720 } 00:16:47.720 ] 00:16:47.720 }, 00:16:47.720 { 00:16:47.720 "subsystem": "scsi", 00:16:47.720 "config": null 00:16:47.720 }, 00:16:47.720 { 00:16:47.720 "subsystem": "scheduler", 00:16:47.720 "config": [ 00:16:47.720 { 00:16:47.720 "method": "framework_set_scheduler", 00:16:47.720 "params": { 00:16:47.720 "name": "static" 00:16:47.720 } 00:16:47.720 } 00:16:47.720 ] 00:16:47.720 }, 00:16:47.720 { 00:16:47.720 "subsystem": "vhost_scsi", 00:16:47.720 "config": [] 00:16:47.720 }, 00:16:47.720 { 00:16:47.720 "subsystem": "vhost_blk", 00:16:47.720 "config": [] 00:16:47.720 }, 00:16:47.720 { 00:16:47.720 "subsystem": "ublk", 00:16:47.720 "config": [ 00:16:47.720 { 00:16:47.720 "method": "ublk_create_target", 00:16:47.720 "params": { 00:16:47.720 "cpumask": "1" 00:16:47.720 } 00:16:47.720 }, 00:16:47.720 { 00:16:47.720 "method": "ublk_start_disk", 00:16:47.720 "params": { 00:16:47.720 "bdev_name": "malloc0", 00:16:47.720 "ublk_id": 0, 00:16:47.720 "num_queues": 1, 00:16:47.720 "queue_depth": 128 00:16:47.720 } 00:16:47.720 } 00:16:47.720 ] 00:16:47.720 }, 00:16:47.720 { 00:16:47.720 "subsystem": "nbd", 00:16:47.720 "config": [] 00:16:47.720 }, 00:16:47.720 { 00:16:47.720 "subsystem": "nvmf", 00:16:47.720 "config": [ 00:16:47.720 { 00:16:47.720 "method": "nvmf_set_config", 00:16:47.720 "params": { 00:16:47.720 "discovery_filter": "match_any", 00:16:47.720 "admin_cmd_passthru": { 00:16:47.720 "identify_ctrlr": false 00:16:47.720 }, 00:16:47.720 "dhchap_digests": [ 00:16:47.720 "sha256", 00:16:47.720 
"sha384", 00:16:47.720 "sha512" 00:16:47.720 ], 00:16:47.720 "dhchap_dhgroups": [ 00:16:47.720 "null", 00:16:47.720 "ffdhe2048", 00:16:47.720 "ffdhe3072", 00:16:47.720 "ffdhe4096", 00:16:47.720 "ffdhe6144", 00:16:47.720 "ffdhe8192" 00:16:47.720 ] 00:16:47.720 } 00:16:47.720 }, 00:16:47.720 { 00:16:47.720 "method": "nvmf_set_max_subsystems", 00:16:47.720 "params": { 00:16:47.720 "max_subsystems": 1024 00:16:47.720 } 00:16:47.720 }, 00:16:47.720 { 00:16:47.720 "method": "nvmf_set_crdt", 00:16:47.720 "params": { 00:16:47.720 "crdt1": 0, 00:16:47.720 "crdt2": 0, 00:16:47.720 "crdt3": 0 00:16:47.720 } 00:16:47.720 } 00:16:47.720 ] 00:16:47.720 }, 00:16:47.720 { 00:16:47.720 "subsystem": "iscsi", 00:16:47.720 "config": [ 00:16:47.720 { 00:16:47.720 "method": "iscsi_set_options", 00:16:47.720 "params": { 00:16:47.720 "node_base": "iqn.2016-06.io.spdk", 00:16:47.720 "max_sessions": 128, 00:16:47.720 "max_connections_per_session": 2, 00:16:47.720 "max_queue_depth": 64, 00:16:47.720 "default_time2wait": 2, 00:16:47.720 "default_time2retain": 20, 00:16:47.720 "first_burst_length": 8192, 00:16:47.720 "immediate_data": true, 00:16:47.720 "allow_duplicated_isid": false, 00:16:47.720 "error_recovery_level": 0, 00:16:47.720 "nop_timeout": 60, 00:16:47.720 "nop_in_interval": 30, 00:16:47.720 "disable_chap": false, 00:16:47.720 "require_chap": false, 00:16:47.720 "mutual_chap": false, 00:16:47.720 "chap_group": 0, 00:16:47.720 "max_large_datain_per_connection": 64, 00:16:47.720 "max_r2t_per_connection": 4, 00:16:47.720 "pdu_pool_size": 36864, 00:16:47.720 "immediate_data_pool_size": 16384, 00:16:47.720 "data_out_pool_size": 2048 00:16:47.720 } 00:16:47.720 } 00:16:47.720 ] 00:16:47.720 } 00:16:47.720 ] 00:16:47.720 }' 00:16:47.720 18:26:06 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 73456 00:16:47.720 18:26:06 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73456 ']' 00:16:47.720 18:26:06 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73456 00:16:47.720 18:26:06 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:47.720 18:26:06 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:47.720 18:26:06 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73456 00:16:47.720 18:26:06 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:47.720 killing process with pid 73456 00:16:47.720 18:26:06 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:47.720 18:26:06 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73456' 00:16:47.720 18:26:06 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73456 00:16:47.720 18:26:06 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73456 00:16:48.740 [2024-11-20 18:26:07.322717] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:48.740 [2024-11-20 18:26:07.355212] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:48.740 [2024-11-20 18:26:07.355321] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:49.011 [2024-11-20 18:26:07.366642] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:49.011 [2024-11-20 18:26:07.366688] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 
00:16:49.011 [2024-11-20 18:26:07.366699] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:49.011 [2024-11-20 18:26:07.366717] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:49.011 [2024-11-20 18:26:07.366855] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:50.393 18:26:08 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=73511 00:16:50.393 18:26:08 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 73511 00:16:50.393 18:26:08 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:16:50.393 18:26:08 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73511 ']' 00:16:50.393 18:26:08 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:16:50.393 "subsystems": [ 00:16:50.394 { 00:16:50.394 "subsystem": "fsdev", 00:16:50.394 "config": [ 00:16:50.394 { 00:16:50.394 "method": "fsdev_set_opts", 00:16:50.394 "params": { 00:16:50.394 "fsdev_io_pool_size": 65535, 00:16:50.394 "fsdev_io_cache_size": 256 00:16:50.394 } 00:16:50.394 } 00:16:50.394 ] 00:16:50.394 }, 00:16:50.394 { 00:16:50.394 "subsystem": "keyring", 00:16:50.394 "config": [] 00:16:50.394 }, 00:16:50.394 { 00:16:50.394 "subsystem": "iobuf", 00:16:50.394 "config": [ 00:16:50.394 { 00:16:50.394 "method": "iobuf_set_options", 00:16:50.394 "params": { 00:16:50.394 "small_pool_count": 8192, 00:16:50.394 "large_pool_count": 1024, 00:16:50.394 "small_bufsize": 8192, 00:16:50.394 "large_bufsize": 135168, 00:16:50.394 "enable_numa": false 00:16:50.394 } 00:16:50.394 } 00:16:50.394 ] 00:16:50.394 }, 00:16:50.394 { 00:16:50.394 "subsystem": "sock", 00:16:50.394 "config": [ 00:16:50.394 { 00:16:50.394 "method": "sock_set_default_impl", 00:16:50.394 "params": { 00:16:50.394 "impl_name": "posix" 00:16:50.394 } 00:16:50.394 }, 00:16:50.394 { 00:16:50.394 "method": "sock_impl_set_options", 00:16:50.394 "params": { 00:16:50.394 "impl_name": "ssl", 00:16:50.394 "recv_buf_size": 4096, 00:16:50.394 "send_buf_size": 4096, 00:16:50.394 "enable_recv_pipe": true, 00:16:50.394 "enable_quickack": false, 00:16:50.394 "enable_placement_id": 0, 00:16:50.394 "enable_zerocopy_send_server": true, 00:16:50.394 "enable_zerocopy_send_client": false, 00:16:50.394 "zerocopy_threshold": 0, 00:16:50.394 "tls_version": 0, 00:16:50.394 "enable_ktls": false 00:16:50.394 } 00:16:50.394 }, 00:16:50.394 { 00:16:50.394 "method": "sock_impl_set_options", 00:16:50.394 "params": { 00:16:50.394 "impl_name": "posix", 00:16:50.394 "recv_buf_size": 2097152, 00:16:50.394 "send_buf_size": 2097152, 00:16:50.394 "enable_recv_pipe": true, 00:16:50.394 "enable_quickack": false, 00:16:50.394 "enable_placement_id": 0, 00:16:50.394 "enable_zerocopy_send_server": true, 00:16:50.394 "enable_zerocopy_send_client": false, 00:16:50.394 "zerocopy_threshold": 0, 00:16:50.394 "tls_version": 0, 00:16:50.394 "enable_ktls": false 00:16:50.394 } 00:16:50.394 } 00:16:50.394 ] 00:16:50.394 }, 00:16:50.394 { 00:16:50.394 "subsystem": "vmd", 00:16:50.394 "config": [] 00:16:50.394 }, 00:16:50.394 { 00:16:50.394 "subsystem": "accel", 00:16:50.394 "config": [ 00:16:50.394 { 00:16:50.394 "method": "accel_set_options", 00:16:50.394 "params": { 00:16:50.394 "small_cache_size": 128, 00:16:50.394 "large_cache_size": 16, 00:16:50.394 "task_count": 2048, 00:16:50.394 "sequence_count": 2048, 00:16:50.394 "buf_count": 2048 00:16:50.394 } 00:16:50.394 } 00:16:50.394 ] 00:16:50.394 }, 00:16:50.394 { 00:16:50.394 "subsystem": "bdev", 00:16:50.394 "config": [ 00:16:50.394 { 00:16:50.394 
"method": "bdev_set_options", 00:16:50.394 "params": { 00:16:50.394 "bdev_io_pool_size": 65535, 00:16:50.394 "bdev_io_cache_size": 256, 00:16:50.394 "bdev_auto_examine": true, 00:16:50.394 "iobuf_small_cache_size": 128, 00:16:50.394 "iobuf_large_cache_size": 16 00:16:50.394 } 00:16:50.394 }, 00:16:50.394 { 00:16:50.394 "method": "bdev_raid_set_options", 00:16:50.394 "params": { 00:16:50.394 "process_window_size_kb": 1024, 00:16:50.394 "process_max_bandwidth_mb_sec": 0 00:16:50.394 } 00:16:50.394 }, 00:16:50.394 { 00:16:50.394 "method": "bdev_iscsi_set_options", 00:16:50.394 "params": { 00:16:50.394 "timeout_sec": 30 00:16:50.394 } 00:16:50.394 }, 00:16:50.394 { 00:16:50.394 "method": "bdev_nvme_set_options", 00:16:50.394 "params": { 00:16:50.394 "action_on_timeout": "none", 00:16:50.394 "timeout_us": 0, 00:16:50.394 "timeout_admin_us": 0, 00:16:50.394 "keep_alive_timeout_ms": 10000, 00:16:50.394 "arbitration_burst": 0, 00:16:50.394 "low_priority_weight": 0, 00:16:50.394 "medium_priority_weight": 0, 00:16:50.394 "high_priority_weight": 0, 00:16:50.394 "nvme_adminq_poll_period_us": 10000, 00:16:50.394 "nvme_ioq_poll_period_us": 0, 00:16:50.394 "io_queue_requests": 0, 00:16:50.394 "delay_cmd_submit": true, 00:16:50.394 "transport_retry_count": 4, 00:16:50.394 "bdev_retry_count": 3, 00:16:50.394 "transport_ack_timeout": 0, 00:16:50.394 "ctrlr_loss_timeout_sec": 0, 00:16:50.394 "reconnect_delay_sec": 0, 00:16:50.394 "fast_io_fail_timeout_sec": 0, 00:16:50.394 "disable_auto_failback": false, 00:16:50.394 "generate_uuids": false, 00:16:50.394 "transport_tos": 0, 00:16:50.394 "nvme_error_stat": false, 00:16:50.394 "rdma_srq_size": 0, 00:16:50.394 "io_path_stat": false, 00:16:50.394 "allow_accel_sequence": false, 00:16:50.394 "rdma_max_cq_size": 0, 00:16:50.394 "rdma_cm_event_timeout_ms": 0, 00:16:50.394 "dhchap_digests": [ 00:16:50.394 "sha256", 00:16:50.394 "sha384", 00:16:50.394 "sha512" 00:16:50.394 ], 00:16:50.394 "dhchap_dhgroups": [ 00:16:50.394 "null", 00:16:50.394 "ffdhe2048", 00:16:50.394 "ffdhe3072", 00:16:50.394 "ffdhe4096", 00:16:50.394 "ffdhe6144", 00:16:50.394 "ffdhe8192" 00:16:50.394 ] 00:16:50.394 } 00:16:50.394 }, 00:16:50.394 { 00:16:50.394 "method": "bdev_nvme_set_hotplug", 00:16:50.394 "params": { 00:16:50.394 "period_us": 100000, 00:16:50.394 "enable": false 00:16:50.394 } 00:16:50.394 }, 00:16:50.394 { 00:16:50.394 "method": "bdev_malloc_create", 00:16:50.394 "params": { 00:16:50.394 "name": "malloc0", 00:16:50.394 "num_blocks": 8192, 00:16:50.394 "block_size": 4096, 00:16:50.394 "physical_block_size": 4096, 00:16:50.394 "uuid": "6148ea22-84e9-459a-9d54-e14a83fba276", 00:16:50.394 "optimal_io_boundary": 0, 00:16:50.394 "md_size": 0, 00:16:50.394 "dif_type": 0, 00:16:50.394 "dif_is_head_of_md": false, 00:16:50.394 "dif_pi_format": 0 00:16:50.394 } 00:16:50.394 }, 00:16:50.394 { 00:16:50.394 "method": "bdev_wait_for_examine" 00:16:50.394 } 00:16:50.394 ] 00:16:50.394 }, 00:16:50.394 { 00:16:50.394 "subsystem": "scsi", 00:16:50.394 "config": null 00:16:50.394 }, 00:16:50.394 { 00:16:50.394 "subsystem": "scheduler", 00:16:50.394 "config": [ 00:16:50.394 { 00:16:50.394 "method": "framework_set_scheduler", 00:16:50.394 "params": { 00:16:50.394 "name": "static" 00:16:50.394 } 00:16:50.394 } 00:16:50.394 ] 00:16:50.394 }, 00:16:50.394 { 00:16:50.394 "subsystem": "vhost_scsi", 00:16:50.394 "config": [] 00:16:50.394 }, 00:16:50.394 { 00:16:50.394 "subsystem": "vhost_blk", 00:16:50.394 "config": [] 00:16:50.394 }, 00:16:50.394 { 00:16:50.394 "subsystem": "ublk", 00:16:50.394 "config": [ 
00:16:50.394 { 00:16:50.394 "method": "ublk_create_target", 00:16:50.394 "params": { 00:16:50.394 "cpumask": "1" 00:16:50.394 } 00:16:50.394 }, 00:16:50.394 { 00:16:50.394 "method": "ublk_start_disk", 00:16:50.394 "params": { 00:16:50.394 "bdev_name": "malloc0", 00:16:50.394 "ublk_id": 0, 00:16:50.394 "num_queues": 1, 00:16:50.394 "queue_depth": 128 00:16:50.394 } 00:16:50.394 } 00:16:50.394 ] 00:16:50.394 }, 00:16:50.394 { 00:16:50.394 "subsystem": "nbd", 00:16:50.394 "config": [] 00:16:50.394 }, 00:16:50.394 { 00:16:50.394 "subsystem": "nvmf", 00:16:50.394 "config": [ 00:16:50.394 { 00:16:50.394 "method": "nvmf_set_config", 00:16:50.394 "params": { 00:16:50.394 "discovery_filter": "match_any", 00:16:50.394 "admin_cmd_passthru": { 00:16:50.394 "identify_ctrlr": false 00:16:50.394 }, 00:16:50.394 "dhchap_digests": [ 00:16:50.394 "sha256", 00:16:50.394 "sha384", 00:16:50.394 "sha512" 00:16:50.394 ], 00:16:50.394 "dhchap_dhgroups": [ 00:16:50.394 "null", 00:16:50.394 "ffdhe2048", 00:16:50.394 "ffdhe3072", 00:16:50.394 "ffdhe4096", 00:16:50.394 "ffdhe6144", 00:16:50.394 "ffdhe8192" 00:16:50.394 ] 00:16:50.394 } 00:16:50.394 }, 00:16:50.394 { 00:16:50.394 "method": "nvmf_set_max_subsystems", 00:16:50.394 "params": { 00:16:50.394 "max_subsystems": 1024 00:16:50.394 } 00:16:50.394 }, 00:16:50.394 { 00:16:50.394 "method": "nvmf_set_crdt", 00:16:50.394 "params": { 00:16:50.394 "crdt1": 0, 00:16:50.394 "crdt2": 0, 00:16:50.394 "crdt3": 0 00:16:50.394 } 00:16:50.394 } 00:16:50.394 ] 00:16:50.394 }, 00:16:50.395 { 00:16:50.395 "subsystem": "iscsi", 00:16:50.395 "config": [ 00:16:50.395 { 00:16:50.395 "method": "iscsi_set_options", 00:16:50.395 "params": { 00:16:50.395 "node_base": "iqn.2016-06.io.spdk", 00:16:50.395 "max_sessions": 128, 00:16:50.395 "max_connections_per_session": 2, 00:16:50.395 "max_queue_depth": 64, 00:16:50.395 "default_time2wait": 2, 00:16:50.395 "default_time2retain": 20, 00:16:50.395 "first_burst_length": 8192, 00:16:50.395 "immediate_data": true, 00:16:50.395 "allow_duplicated_isid": false, 00:16:50.395 "error_recovery_level": 0, 00:16:50.395 "nop_timeout": 60, 00:16:50.395 "nop_in_interval": 30, 00:16:50.395 "disable_chap": false, 00:16:50.395 "require_chap": false, 00:16:50.395 "mutual_chap": false, 00:16:50.395 "chap_group": 0, 00:16:50.395 "max_large_datain_per_connection": 64, 00:16:50.395 "max_r2t_per_connection": 4, 00:16:50.395 "pdu_pool_size": 36864, 00:16:50.395 "immediate_data_pool_size": 16384, 00:16:50.395 "data_out_pool_size": 2048 00:16:50.395 } 00:16:50.395 } 00:16:50.395 ] 00:16:50.395 } 00:16:50.395 ] 00:16:50.395 }' 00:16:50.395 18:26:08 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.395 18:26:08 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:50.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.395 18:26:08 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.395 18:26:08 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:50.395 18:26:08 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:50.395 [2024-11-20 18:26:08.674644] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
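[annotation] ublk.sh@118 replays the saved configuration into a fresh spdk_tgt, handing it over as /dev/fd/63 instead of a file on disk. A hedged sketch of that pattern (binary path as in the trace; $config holds the JSON captured by save_config above):

    # Process substitution exposes the saved JSON to spdk_tgt as /dev/fd/63,
    # so the ublk target and /dev/ublkb0 are recreated without writing a
    # config file to the filesystem.
    build/bin/spdk_tgt -L ublk -c <(echo "$config") &
    tgtpid=$!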
00:16:50.395 [2024-11-20 18:26:08.674778] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73511 ] 00:16:50.395 [2024-11-20 18:26:08.832611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.395 [2024-11-20 18:26:08.936961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.336 [2024-11-20 18:26:09.634111] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:51.336 [2024-11-20 18:26:09.634788] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:51.336 [2024-11-20 18:26:09.642209] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:51.336 [2024-11-20 18:26:09.642275] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:51.336 [2024-11-20 18:26:09.642284] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:51.336 [2024-11-20 18:26:09.642290] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:51.336 [2024-11-20 18:26:09.651184] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:51.336 [2024-11-20 18:26:09.651200] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:51.336 [2024-11-20 18:26:09.658114] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:51.336 [2024-11-20 18:26:09.658195] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:51.336 [2024-11-20 18:26:09.675113] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:51.336 18:26:09 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:51.336 18:26:09 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:51.336 18:26:09 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:51.336 18:26:09 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:51.336 18:26:09 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.336 18:26:09 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:51.337 18:26:09 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.337 18:26:09 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:51.337 18:26:09 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:51.337 18:26:09 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 73511 00:16:51.337 18:26:09 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73511 ']' 00:16:51.337 18:26:09 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73511 00:16:51.337 18:26:09 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:51.337 18:26:09 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:51.337 18:26:09 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73511 00:16:51.337 killing process with pid 73511 00:16:51.337 18:26:09 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:51.337 
18:26:09 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:51.337 18:26:09 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73511' 00:16:51.337 18:26:09 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73511 00:16:51.337 18:26:09 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73511 00:16:52.279 [2024-11-20 18:26:10.805014] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:52.279 [2024-11-20 18:26:10.850137] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:52.279 [2024-11-20 18:26:10.850236] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:52.279 [2024-11-20 18:26:10.854246] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:52.279 [2024-11-20 18:26:10.854293] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:52.279 [2024-11-20 18:26:10.854300] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:52.279 [2024-11-20 18:26:10.854322] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:52.279 [2024-11-20 18:26:10.854439] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:53.665 18:26:12 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:53.665 00:16:53.665 real 0m7.398s 00:16:53.665 user 0m5.005s 00:16:53.665 sys 0m3.030s 00:16:53.665 18:26:12 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:53.665 18:26:12 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:53.665 ************************************ 00:16:53.665 END TEST test_save_ublk_config 00:16:53.665 ************************************ 00:16:53.665 18:26:12 ublk -- ublk/ublk.sh@139 -- # spdk_pid=73582 00:16:53.665 18:26:12 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:53.665 18:26:12 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:53.665 18:26:12 ublk -- ublk/ublk.sh@141 -- # waitforlisten 73582 00:16:53.665 18:26:12 ublk -- common/autotest_common.sh@835 -- # '[' -z 73582 ']' 00:16:53.665 18:26:12 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.665 18:26:12 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:53.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:53.665 18:26:12 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:53.665 18:26:12 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:53.665 18:26:12 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:53.665 [2024-11-20 18:26:12.197944] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
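[annotation] For the remaining tests ublk.sh@138 keeps one long-lived target on two cores (-m 0x3). A minimal launch-and-wait sketch under the same assumptions; the rpc_get_methods probe is an illustrative stand-in for the harness's waitforlisten helper, which polls /var/tmp/spdk.sock the same way:

    # Start the target on cores 0-1 with ublk tracing enabled, then poll
    # the RPC socket until it answers.
    build/bin/spdk_tgt -m 0x3 -L ublk &
    spdk_pid=$!
    until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done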
00:16:53.665 [2024-11-20 18:26:12.198069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73582 ] 00:16:53.926 [2024-11-20 18:26:12.355957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:53.926 [2024-11-20 18:26:12.463226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.926 [2024-11-20 18:26:12.463324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.497 18:26:13 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:54.497 18:26:13 ublk -- common/autotest_common.sh@868 -- # return 0 00:16:54.497 18:26:13 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:16:54.497 18:26:13 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:54.497 18:26:13 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:54.497 18:26:13 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:54.497 ************************************ 00:16:54.497 START TEST test_create_ublk 00:16:54.497 ************************************ 00:16:54.497 18:26:13 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:16:54.497 18:26:13 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:16:54.497 18:26:13 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.497 18:26:13 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:54.497 [2024-11-20 18:26:13.062114] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:54.497 [2024-11-20 18:26:13.063818] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:54.497 18:26:13 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.497 18:26:13 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:16:54.497 18:26:13 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:16:54.497 18:26:13 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.497 18:26:13 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:54.759 18:26:13 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.759 18:26:13 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:16:54.759 18:26:13 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:54.759 18:26:13 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.759 18:26:13 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:54.759 [2024-11-20 18:26:13.230239] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:54.759 [2024-11-20 18:26:13.230563] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:54.759 [2024-11-20 18:26:13.230576] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:54.759 [2024-11-20 18:26:13.230584] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:54.759 [2024-11-20 18:26:13.239336] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:54.759 [2024-11-20 18:26:13.239358] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:54.759 
[2024-11-20 18:26:13.246121] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:54.759 [2024-11-20 18:26:13.254157] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:54.759 [2024-11-20 18:26:13.271120] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:54.759 18:26:13 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.759 18:26:13 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:16:54.759 18:26:13 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:16:54.759 18:26:13 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:16:54.759 18:26:13 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.759 18:26:13 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:54.759 18:26:13 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.759 18:26:13 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:16:54.759 { 00:16:54.759 "ublk_device": "/dev/ublkb0", 00:16:54.759 "id": 0, 00:16:54.759 "queue_depth": 512, 00:16:54.759 "num_queues": 4, 00:16:54.759 "bdev_name": "Malloc0" 00:16:54.759 } 00:16:54.759 ]' 00:16:54.759 18:26:13 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:16:54.759 18:26:13 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:54.759 18:26:13 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:16:54.759 18:26:13 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:16:54.759 18:26:13 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:16:55.020 18:26:13 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:16:55.020 18:26:13 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:16:55.020 18:26:13 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:16:55.020 18:26:13 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:16:55.020 18:26:13 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:55.020 18:26:13 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:16:55.020 18:26:13 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:16:55.020 18:26:13 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:16:55.020 18:26:13 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:16:55.020 18:26:13 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:16:55.020 18:26:13 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:16:55.020 18:26:13 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:16:55.020 18:26:13 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:16:55.020 18:26:13 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:16:55.020 18:26:13 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:55.020 18:26:13 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
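[annotation] For readability, the fio_template assembled by run_fio_test above breaks down as follows (an annotated copy of the exact command from the trace, not a second invocation):

    # --filename=/dev/ublkb0      the ublk block device under test
    # --size=134217728            cover the first 128 MiB of the device
    # --rw=write --direct=1       sequential O_DIRECT writes
    # --time_based --runtime=10   run for 10 s; the writes consume the whole
    #                             budget, so the verify read phase never starts
    #                             (fio warns about exactly this below)
    # --verify_pattern=0xcc       data would be checked against 0xcc on read-back
    fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
        --rw=write --direct=1 --time_based --runtime=10 \
        --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0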
00:16:55.020 18:26:13 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:16:55.020 fio: verification read phase will never start because write phase uses all of runtime 00:16:55.020 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:16:55.020 fio-3.35 00:16:55.020 Starting 1 process 00:17:07.232 00:17:07.233 fio_test: (groupid=0, jobs=1): err= 0: pid=73628: Wed Nov 20 18:26:23 2024 00:17:07.233 write: IOPS=12.9k, BW=50.6MiB/s (53.0MB/s)(506MiB/10001msec); 0 zone resets 00:17:07.233 clat (usec): min=35, max=8153, avg=76.44, stdev=203.69 00:17:07.233 lat (usec): min=35, max=8155, avg=76.91, stdev=203.73 00:17:07.233 clat percentiles (usec): 00:17:07.233 | 1.00th=[ 51], 5.00th=[ 55], 10.00th=[ 57], 20.00th=[ 59], 00:17:07.233 | 30.00th=[ 61], 40.00th=[ 62], 50.00th=[ 63], 60.00th=[ 65], 00:17:07.233 | 70.00th=[ 67], 80.00th=[ 69], 90.00th=[ 73], 95.00th=[ 77], 00:17:07.233 | 99.00th=[ 133], 99.50th=[ 233], 99.90th=[ 3884], 99.95th=[ 4047], 00:17:07.233 | 99.99th=[ 4178] 00:17:07.233 bw ( KiB/s): min=23160, max=61568, per=99.37%, avg=51459.53, stdev=14511.90, samples=19 00:17:07.233 iops : min= 5790, max=15392, avg=12864.84, stdev=3628.04, samples=19 00:17:07.233 lat (usec) : 50=0.59%, 100=98.25%, 250=0.69%, 500=0.08%, 750=0.01% 00:17:07.233 lat (usec) : 1000=0.02% 00:17:07.233 lat (msec) : 2=0.04%, 4=0.26%, 10=0.06% 00:17:07.233 cpu : usr=2.45%, sys=8.95%, ctx=129521, majf=0, minf=795 00:17:07.233 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:07.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.233 issued rwts: total=0,129477,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:07.233 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:07.233 00:17:07.233 Run status group 0 (all jobs): 00:17:07.233 WRITE: bw=50.6MiB/s (53.0MB/s), 50.6MiB/s-50.6MiB/s (53.0MB/s-53.0MB/s), io=506MiB (530MB), run=10001-10001msec 00:17:07.233 00:17:07.233 Disk stats (read/write): 00:17:07.233 ublkb0: ios=0/127954, merge=0/0, ticks=0/8698, in_queue=8699, util=99.09% 00:17:07.233 18:26:23 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:17:07.233 18:26:23 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.233 18:26:23 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:07.233 [2024-11-20 18:26:23.690343] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:07.233 [2024-11-20 18:26:23.719631] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:07.233 [2024-11-20 18:26:23.720676] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:07.233 [2024-11-20 18:26:23.727135] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:07.233 [2024-11-20 18:26:23.727384] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:07.233 [2024-11-20 18:26:23.727393] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:07.233 18:26:23 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.233 18:26:23 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:17:07.233 18:26:23 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:17:07.233 18:26:23 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:17:07.233 18:26:23 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:07.233 18:26:23 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:07.233 18:26:23 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:07.233 18:26:23 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:07.233 18:26:23 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:17:07.233 18:26:23 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.233 18:26:23 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:07.233 [2024-11-20 18:26:23.743178] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:17:07.233 request: 00:17:07.233 { 00:17:07.233 "ublk_id": 0, 00:17:07.233 "method": "ublk_stop_disk", 00:17:07.233 "req_id": 1 00:17:07.233 } 00:17:07.233 Got JSON-RPC error response 00:17:07.233 response: 00:17:07.233 { 00:17:07.233 "code": -19, 00:17:07.233 "message": "No such device" 00:17:07.233 } 00:17:07.233 18:26:23 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:07.233 18:26:23 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:17:07.233 18:26:23 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:07.233 18:26:23 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:07.233 18:26:23 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:07.233 18:26:23 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:17:07.233 18:26:23 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.233 18:26:23 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:07.233 [2024-11-20 18:26:23.759179] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:07.233 [2024-11-20 18:26:23.767109] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:07.233 [2024-11-20 18:26:23.767140] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:07.233 18:26:23 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.233 18:26:23 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:07.233 18:26:23 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.233 18:26:23 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:07.233 18:26:24 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.233 18:26:24 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:17:07.233 18:26:24 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:07.233 18:26:24 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.233 18:26:24 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:07.233 18:26:24 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.233 18:26:24 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:07.233 18:26:24 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:17:07.233 18:26:24 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:07.233 18:26:24 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:07.233 18:26:24 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.233 18:26:24 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:07.233 18:26:24 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.233 18:26:24 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:07.233 18:26:24 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:17:07.233 ************************************ 00:17:07.233 END TEST test_create_ublk 00:17:07.233 ************************************ 00:17:07.233 18:26:24 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:07.233 00:17:07.233 real 0m11.174s 00:17:07.233 user 0m0.561s 00:17:07.233 sys 0m0.960s 00:17:07.233 18:26:24 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:07.233 18:26:24 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:07.233 18:26:24 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:17:07.233 18:26:24 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:07.233 18:26:24 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:07.233 18:26:24 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:07.233 ************************************ 00:17:07.233 START TEST test_create_multi_ublk 00:17:07.233 ************************************ 00:17:07.233 18:26:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:17:07.233 18:26:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:17:07.233 18:26:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.233 18:26:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:07.233 [2024-11-20 18:26:24.275120] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:07.233 [2024-11-20 18:26:24.276690] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:07.233 18:26:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.233 18:26:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:17:07.233 18:26:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:17:07.233 18:26:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:07.233 18:26:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:17:07.233 18:26:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.233 18:26:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:07.233 18:26:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.233 18:26:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:17:07.233 18:26:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:17:07.233 18:26:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.233 18:26:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:07.233 [2024-11-20 18:26:24.491221] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:17:07.233 [2024-11-20 18:26:24.491523] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:17:07.233 [2024-11-20 18:26:24.491535] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:07.233 [2024-11-20 18:26:24.491543] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:07.233 [2024-11-20 18:26:24.515119] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:07.233 [2024-11-20 18:26:24.515139] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:07.233 [2024-11-20 18:26:24.527122] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:07.233 [2024-11-20 18:26:24.527615] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:07.233 [2024-11-20 18:26:24.567118] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:07.233 18:26:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.233 18:26:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:17:07.233 18:26:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:07.233 18:26:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:17:07.233 18:26:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.234 18:26:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:07.234 18:26:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.234 18:26:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:17:07.234 18:26:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:17:07.234 18:26:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.234 18:26:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:07.234 [2024-11-20 18:26:24.786218] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:17:07.234 [2024-11-20 18:26:24.786515] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:17:07.234 [2024-11-20 18:26:24.786523] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:07.234 [2024-11-20 18:26:24.786527] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:07.234 [2024-11-20 18:26:24.794130] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:07.234 [2024-11-20 18:26:24.794147] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:07.234 [2024-11-20 18:26:24.802120] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:07.234 [2024-11-20 18:26:24.802612] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:07.234 [2024-11-20 18:26:24.819139] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:07.234 18:26:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.234 18:26:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:17:07.234 18:26:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:07.234 
18:26:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:17:07.234 18:26:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.234 18:26:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:07.234 18:26:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.234 18:26:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:17:07.234 18:26:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:17:07.234 18:26:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.234 18:26:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:07.234 [2024-11-20 18:26:24.986195] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:17:07.234 [2024-11-20 18:26:24.986494] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:17:07.234 [2024-11-20 18:26:24.986504] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:17:07.234 [2024-11-20 18:26:24.986510] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:17:07.234 [2024-11-20 18:26:24.994126] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:07.234 [2024-11-20 18:26:24.994146] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:07.234 [2024-11-20 18:26:25.002124] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:07.234 [2024-11-20 18:26:25.002616] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:17:07.234 [2024-11-20 18:26:25.019125] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:07.234 [2024-11-20 18:26:25.178212] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:17:07.234 [2024-11-20 18:26:25.178507] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:17:07.234 [2024-11-20 18:26:25.178514] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:17:07.234 [2024-11-20 18:26:25.178519] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:17:07.234 
[2024-11-20 18:26:25.186137] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:07.234 [2024-11-20 18:26:25.186153] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:07.234 [2024-11-20 18:26:25.194123] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:07.234 [2024-11-20 18:26:25.194608] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:17:07.234 [2024-11-20 18:26:25.203143] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:17:07.234 { 00:17:07.234 "ublk_device": "/dev/ublkb0", 00:17:07.234 "id": 0, 00:17:07.234 "queue_depth": 512, 00:17:07.234 "num_queues": 4, 00:17:07.234 "bdev_name": "Malloc0" 00:17:07.234 }, 00:17:07.234 { 00:17:07.234 "ublk_device": "/dev/ublkb1", 00:17:07.234 "id": 1, 00:17:07.234 "queue_depth": 512, 00:17:07.234 "num_queues": 4, 00:17:07.234 "bdev_name": "Malloc1" 00:17:07.234 }, 00:17:07.234 { 00:17:07.234 "ublk_device": "/dev/ublkb2", 00:17:07.234 "id": 2, 00:17:07.234 "queue_depth": 512, 00:17:07.234 "num_queues": 4, 00:17:07.234 "bdev_name": "Malloc2" 00:17:07.234 }, 00:17:07.234 { 00:17:07.234 "ublk_device": "/dev/ublkb3", 00:17:07.234 "id": 3, 00:17:07.234 "queue_depth": 512, 00:17:07.234 "num_queues": 4, 00:17:07.234 "bdev_name": "Malloc3" 00:17:07.234 } 00:17:07.234 ]' 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:17:07.234 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:17:07.235 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:07.235 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:17:07.235 18:26:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.235 18:26:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:07.235 [2024-11-20 18:26:25.842210] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:07.492 [2024-11-20 18:26:25.887139] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:07.492 [2024-11-20 18:26:25.888052] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:07.493 [2024-11-20 18:26:25.898146] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:07.493 [2024-11-20 18:26:25.898394] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:07.493 [2024-11-20 18:26:25.898403] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:07.493 18:26:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.493 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:07.493 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:17:07.493 18:26:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.493 18:26:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:07.493 [2024-11-20 18:26:25.908174] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:07.493 [2024-11-20 18:26:25.937158] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:07.493 [2024-11-20 18:26:25.937960] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:07.493 [2024-11-20 18:26:25.945132] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:07.493 [2024-11-20 18:26:25.945362] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:07.493 [2024-11-20 18:26:25.945370] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:07.493 18:26:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.493 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:07.493 18:26:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:17:07.493 18:26:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.493 18:26:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:07.493 [2024-11-20 18:26:25.961185] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:17:07.493 [2024-11-20 18:26:26.001643] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:07.493 [2024-11-20 18:26:26.002713] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:17:07.493 [2024-11-20 18:26:26.009131] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:07.493 [2024-11-20 18:26:26.009367] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:17:07.493 [2024-11-20 18:26:26.009375] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:17:07.493 18:26:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.493 18:26:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:07.493 18:26:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:17:07.493 18:26:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.493 18:26:26 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:17:07.493 [2024-11-20 18:26:26.025177] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:17:07.493 [2024-11-20 18:26:26.065156] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:07.493 [2024-11-20 18:26:26.065826] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:17:07.493 [2024-11-20 18:26:26.074154] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:07.493 [2024-11-20 18:26:26.074399] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:17:07.493 [2024-11-20 18:26:26.074406] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:17:07.493 18:26:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.493 18:26:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:17:07.751 [2024-11-20 18:26:26.267174] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:07.751 [2024-11-20 18:26:26.273109] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:07.751 [2024-11-20 18:26:26.273137] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:07.751 18:26:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:17:07.751 18:26:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:07.751 18:26:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:07.751 18:26:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.751 18:26:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:08.316 18:26:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.316 18:26:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:08.316 18:26:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:17:08.316 18:26:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.316 18:26:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:08.574 18:26:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.574 18:26:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:08.574 18:26:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:17:08.574 18:26:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.575 18:26:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:08.833 18:26:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.833 18:26:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:08.833 18:26:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:17:08.833 18:26:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.833 18:26:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:08.833 18:26:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.833 18:26:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:17:08.833 18:26:27 
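# --- Annotation (editor's note, not part of the captured log) ---
# Teardown order traced above: each ublk_stop_disk drives two kernel control
# commands (UBLK_CMD_STOP_DEV, then UBLK_CMD_DEL_DEV) before the device is
# removed from the tailq; only then is the target destroyed and the backing
# malloc bdevs deleted. A sketch under those assumptions:
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for i in $(seq 0 3); do
    $rpc ublk_stop_disk "$i"
done
$rpc -t 120 ublk_destroy_target     # generous timeout for kernel-side teardown
for i in $(seq 0 3); do
    $rpc bdev_malloc_delete "Malloc$i"
done
# --- end annotation ---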
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:08.833 18:26:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.833 18:26:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:08.833 18:26:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.833 18:26:27 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:08.833 18:26:27 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:17:08.833 18:26:27 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:08.833 18:26:27 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:08.833 18:26:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.833 18:26:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:08.833 18:26:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.833 18:26:27 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:08.833 18:26:27 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:17:09.091 ************************************ 00:17:09.091 END TEST test_create_multi_ublk 00:17:09.091 ************************************ 00:17:09.091 18:26:27 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:09.091 00:17:09.091 real 0m3.219s 00:17:09.091 user 0m0.787s 00:17:09.091 sys 0m0.146s 00:17:09.091 18:26:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:09.091 18:26:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:09.091 18:26:27 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:17:09.091 18:26:27 ublk -- ublk/ublk.sh@147 -- # cleanup 00:17:09.091 18:26:27 ublk -- ublk/ublk.sh@130 -- # killprocess 73582 00:17:09.091 18:26:27 ublk -- common/autotest_common.sh@954 -- # '[' -z 73582 ']' 00:17:09.091 18:26:27 ublk -- common/autotest_common.sh@958 -- # kill -0 73582 00:17:09.091 18:26:27 ublk -- common/autotest_common.sh@959 -- # uname 00:17:09.091 18:26:27 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:09.091 18:26:27 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73582 00:17:09.091 killing process with pid 73582 00:17:09.091 18:26:27 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:09.091 18:26:27 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:09.091 18:26:27 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73582' 00:17:09.091 18:26:27 ublk -- common/autotest_common.sh@973 -- # kill 73582 00:17:09.091 18:26:27 ublk -- common/autotest_common.sh@978 -- # wait 73582 00:17:09.657 [2024-11-20 18:26:28.062261] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:09.657 [2024-11-20 18:26:28.062309] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:10.225 00:17:10.225 real 0m24.222s 00:17:10.225 user 0m34.221s 00:17:10.225 sys 0m9.375s 00:17:10.225 18:26:28 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:10.225 ************************************ 00:17:10.225 END TEST ublk 00:17:10.225 18:26:28 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:10.225 ************************************ 00:17:10.225 18:26:28 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:10.225 
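# --- Annotation (editor's note, not part of the captured log) ---
# check_leftover_devices (lvol/common.sh), traced earlier in this block,
# asserts a clean slate after the test: both RPCs must return empty JSON
# arrays before the suite is allowed to pass. Sketch:
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
[[ $($rpc bdev_get_bdevs         | jq length) == 0 ]]
[[ $($rpc bdev_lvol_get_lvstores | jq length) == 0 ]]
# --- end annotation ---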
18:26:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:10.225 18:26:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:10.225 18:26:28 -- common/autotest_common.sh@10 -- # set +x 00:17:10.225 ************************************ 00:17:10.225 START TEST ublk_recovery 00:17:10.225 ************************************ 00:17:10.225 18:26:28 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:10.225 * Looking for test storage... 00:17:10.225 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:17:10.225 18:26:28 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:10.225 18:26:28 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:10.225 18:26:28 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:17:10.486 18:26:28 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:10.486 18:26:28 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:10.486 18:26:28 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:10.486 18:26:28 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:10.486 18:26:28 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:17:10.486 18:26:28 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:17:10.486 18:26:28 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:17:10.486 18:26:28 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:17:10.486 18:26:28 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:17:10.486 18:26:28 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:17:10.486 18:26:28 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:17:10.486 18:26:28 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:10.486 18:26:28 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:17:10.486 18:26:28 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:17:10.486 18:26:28 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:10.486 18:26:28 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:10.486 18:26:28 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:17:10.486 18:26:28 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:17:10.486 18:26:28 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:10.486 18:26:28 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:17:10.486 18:26:28 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:17:10.486 18:26:28 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:17:10.486 18:26:28 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:17:10.486 18:26:28 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:10.486 18:26:28 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:17:10.486 18:26:28 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:17:10.486 18:26:28 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:10.486 18:26:28 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:10.486 18:26:28 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:17:10.486 18:26:28 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:10.486 18:26:28 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:10.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.486 --rc genhtml_branch_coverage=1 00:17:10.486 --rc genhtml_function_coverage=1 00:17:10.486 --rc genhtml_legend=1 00:17:10.486 --rc geninfo_all_blocks=1 00:17:10.486 --rc geninfo_unexecuted_blocks=1 00:17:10.486 00:17:10.486 ' 00:17:10.486 18:26:28 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:10.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.486 --rc genhtml_branch_coverage=1 00:17:10.486 --rc genhtml_function_coverage=1 00:17:10.486 --rc genhtml_legend=1 00:17:10.486 --rc geninfo_all_blocks=1 00:17:10.486 --rc geninfo_unexecuted_blocks=1 00:17:10.486 00:17:10.486 ' 00:17:10.486 18:26:28 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:10.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.486 --rc genhtml_branch_coverage=1 00:17:10.486 --rc genhtml_function_coverage=1 00:17:10.486 --rc genhtml_legend=1 00:17:10.486 --rc geninfo_all_blocks=1 00:17:10.486 --rc geninfo_unexecuted_blocks=1 00:17:10.486 00:17:10.486 ' 00:17:10.486 18:26:28 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:10.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.486 --rc genhtml_branch_coverage=1 00:17:10.486 --rc genhtml_function_coverage=1 00:17:10.486 --rc genhtml_legend=1 00:17:10.486 --rc geninfo_all_blocks=1 00:17:10.486 --rc geninfo_unexecuted_blocks=1 00:17:10.486 00:17:10.486 ' 00:17:10.486 18:26:28 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:17:10.486 18:26:28 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:17:10.486 18:26:28 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:17:10.486 18:26:28 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:17:10.486 18:26:28 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:17:10.486 18:26:28 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:17:10.486 18:26:28 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:17:10.486 18:26:28 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:17:10.486 18:26:28 ublk_recovery -- lvol/common.sh@14 
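# --- Annotation (editor's note, not part of the captured log) ---
# The scripts/common.sh trace above decides whether the installed lcov is
# older than 2.0 ("lt 1.15 2"), which selects the legacy LCOV_OPTS exported
# just after it. A condensed sketch of the field-by-field comparison (the
# function name version_lt is hypothetical; the script's own helpers are
# lt/cmp_versions):
version_lt() {
    local -a a b
    IFS=.- read -ra a <<< "$1"
    IFS=.- read -ra b <<< "$2"
    for (( v = 0; v < ${#a[@]} || v < ${#b[@]}; v++ )); do
        (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
        (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
    done
    return 1    # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov < 2: keep legacy branch/function coverage flags"
# --- end annotation ---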
-- # LVS_DEFAULT_CAPACITY=130023424 00:17:10.487 18:26:28 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:17:10.487 18:26:28 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=73977 00:17:10.487 18:26:28 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:10.487 18:26:28 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 73977 00:17:10.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.487 18:26:28 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 73977 ']' 00:17:10.487 18:26:28 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.487 18:26:28 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:10.487 18:26:28 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.487 18:26:28 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:10.487 18:26:28 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:10.487 18:26:28 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:10.487 [2024-11-20 18:26:28.995079] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:17:10.487 [2024-11-20 18:26:28.995226] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73977 ] 00:17:10.745 [2024-11-20 18:26:29.155703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:10.745 [2024-11-20 18:26:29.246387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.745 [2024-11-20 18:26:29.246441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.310 18:26:29 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:11.310 18:26:29 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:17:11.310 18:26:29 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:17:11.310 18:26:29 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.310 18:26:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:11.310 [2024-11-20 18:26:29.834115] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:11.310 [2024-11-20 18:26:29.835650] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:11.310 18:26:29 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.310 18:26:29 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:11.310 18:26:29 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.310 18:26:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:11.310 malloc0 00:17:11.310 18:26:29 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.310 18:26:29 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:17:11.310 18:26:29 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.310 18:26:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:11.310 [2024-11-20 18:26:29.921230] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:17:11.310 [2024-11-20 18:26:29.921310] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:17:11.310 [2024-11-20 18:26:29.921319] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:11.310 [2024-11-20 18:26:29.921327] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:11.310 [2024-11-20 18:26:29.930185] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:11.310 [2024-11-20 18:26:29.930201] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:11.310 [2024-11-20 18:26:29.937117] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:11.310 [2024-11-20 18:26:29.937235] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:11.568 [2024-11-20 18:26:29.952132] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:11.568 1 00:17:11.568 18:26:29 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.568 18:26:29 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:17:12.501 18:26:30 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=74012 00:17:12.501 18:26:30 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:17:12.501 18:26:30 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:17:12.501 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:12.501 fio-3.35 00:17:12.501 Starting 1 process 00:17:17.769 18:26:35 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 73977 00:17:17.769 18:26:35 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:17:23.060 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 73977 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:17:23.061 18:26:40 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=74119 00:17:23.061 18:26:40 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:23.061 18:26:40 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:23.061 18:26:40 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 74119 00:17:23.061 18:26:40 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74119 ']' 00:17:23.061 18:26:40 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:23.061 18:26:40 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:23.061 18:26:40 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.061 18:26:40 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:23.061 18:26:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.061 [2024-11-20 18:26:41.045035] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
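# --- Annotation (editor's note, not part of the captured log) ---
# The recovery scenario traced above: ublk_start_disk maps "malloc0 1 -q 2
# -d 128" onto three kernel control commands (UBLK_CMD_ADD_DEV ->
# UBLK_CMD_SET_PARAMS -> UBLK_CMD_START_DEV); a 60 s fio job is then
# launched against /dev/ublkb1 and, five seconds in, the target is
# SIGKILLed with I/O in flight before a fresh target is started. Sketch,
# mirroring the traced commands:
taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
    --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
    --time_based --runtime=60 &
fio_proc=$!
sleep 5
kill -9 "$spdk_pid"                         # simulate a hard target crash
sleep 5
"$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &   # fresh target (pid 74119 above)
spdk_pid=$!
# --- end annotation ---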
00:17:23.061 [2024-11-20 18:26:41.045206] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74119 ] 00:17:23.061 [2024-11-20 18:26:41.203637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:23.061 [2024-11-20 18:26:41.291624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:23.061 [2024-11-20 18:26:41.291722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.319 18:26:41 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:23.319 18:26:41 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:17:23.319 18:26:41 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:17:23.319 18:26:41 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.319 18:26:41 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.319 [2024-11-20 18:26:41.890114] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:23.319 [2024-11-20 18:26:41.891612] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:23.319 18:26:41 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.319 18:26:41 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:23.319 18:26:41 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.319 18:26:41 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.577 malloc0 00:17:23.577 18:26:41 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.577 18:26:41 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:17:23.577 18:26:41 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.577 18:26:41 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.577 [2024-11-20 18:26:41.970526] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:17:23.577 [2024-11-20 18:26:41.970558] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:23.577 [2024-11-20 18:26:41.970566] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:23.577 [2024-11-20 18:26:41.978140] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:23.577 [2024-11-20 18:26:41.978161] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:17:23.577 [2024-11-20 18:26:41.978168] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:17:23.577 [2024-11-20 18:26:41.978232] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:17:23.577 1 00:17:23.577 18:26:41 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.577 18:26:41 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 74012 00:17:23.577 [2024-11-20 18:26:41.986120] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:17:23.577 [2024-11-20 18:26:41.992669] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:17:23.577 [2024-11-20 18:26:42.000303] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:17:23.577 [2024-11-20 
18:26:42.000321] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:18:19.783 00:18:19.783 fio_test: (groupid=0, jobs=1): err= 0: pid=74017: Wed Nov 20 18:27:31 2024 00:18:19.783 read: IOPS=26.9k, BW=105MiB/s (110MB/s)(6306MiB/60002msec) 00:18:19.783 slat (nsec): min=926, max=857686, avg=4960.52, stdev=2345.08 00:18:19.783 clat (usec): min=655, max=6043.7k, avg=2350.75, stdev=39202.87 00:18:19.783 lat (usec): min=660, max=6043.7k, avg=2355.71, stdev=39202.87 00:18:19.783 clat percentiles (usec): 00:18:19.783 | 1.00th=[ 1745], 5.00th=[ 1893], 10.00th=[ 1926], 20.00th=[ 1942], 00:18:19.783 | 30.00th=[ 1958], 40.00th=[ 1975], 50.00th=[ 1991], 60.00th=[ 2008], 00:18:19.783 | 70.00th=[ 2024], 80.00th=[ 2040], 90.00th=[ 2089], 95.00th=[ 2835], 00:18:19.783 | 99.00th=[ 4752], 99.50th=[ 5211], 99.90th=[ 6718], 99.95th=[ 7308], 00:18:19.783 | 99.99th=[13173] 00:18:19.783 bw ( KiB/s): min=19072, max=123440, per=100.00%, avg=118542.89, stdev=12868.24, samples=108 00:18:19.783 iops : min= 4768, max=30860, avg=29635.72, stdev=3217.06, samples=108 00:18:19.783 write: IOPS=26.9k, BW=105MiB/s (110MB/s)(6301MiB/60002msec); 0 zone resets 00:18:19.783 slat (nsec): min=984, max=873519, avg=4988.13, stdev=2419.45 00:18:19.783 clat (usec): min=661, max=6043.6k, avg=2397.69, stdev=36840.38 00:18:19.783 lat (usec): min=666, max=6043.6k, avg=2402.68, stdev=36840.38 00:18:19.783 clat percentiles (usec): 00:18:19.783 | 1.00th=[ 1778], 5.00th=[ 1975], 10.00th=[ 2008], 20.00th=[ 2040], 00:18:19.783 | 30.00th=[ 2057], 40.00th=[ 2073], 50.00th=[ 2073], 60.00th=[ 2089], 00:18:19.783 | 70.00th=[ 2114], 80.00th=[ 2147], 90.00th=[ 2180], 95.00th=[ 2769], 00:18:19.783 | 99.00th=[ 4752], 99.50th=[ 5276], 99.90th=[ 6652], 99.95th=[ 7308], 00:18:19.783 | 99.99th=[13304] 00:18:19.783 bw ( KiB/s): min=19368, max=124456, per=100.00%, avg=118442.30, stdev=12892.94, samples=108 00:18:19.783 iops : min= 4842, max=31114, avg=29610.57, stdev=3223.24, samples=108 00:18:19.783 lat (usec) : 750=0.01%, 1000=0.01% 00:18:19.783 lat (msec) : 2=32.86%, 4=64.95%, 10=2.17%, 20=0.02%, >=2000=0.01% 00:18:19.783 cpu : usr=6.03%, sys=27.46%, ctx=109743, majf=0, minf=13 00:18:19.783 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:18:19.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:19.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:19.783 issued rwts: total=1614391,1613050,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:19.783 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:19.783 00:18:19.783 Run status group 0 (all jobs): 00:18:19.783 READ: bw=105MiB/s (110MB/s), 105MiB/s-105MiB/s (110MB/s-110MB/s), io=6306MiB (6613MB), run=60002-60002msec 00:18:19.783 WRITE: bw=105MiB/s (110MB/s), 105MiB/s-105MiB/s (110MB/s-110MB/s), io=6301MiB (6607MB), run=60002-60002msec 00:18:19.783 00:18:19.783 Disk stats (read/write): 00:18:19.783 ublkb1: ios=1611111/1609711, merge=0/0, ticks=3702504/3643965, in_queue=7346469, util=99.89% 00:18:19.783 18:27:31 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:18:19.783 18:27:31 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.783 18:27:31 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:19.784 [2024-11-20 18:27:31.218042] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:18:19.784 [2024-11-20 18:27:31.248225] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 
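# --- Annotation (editor's note, not part of the captured log) ---
# ublk_recover_disk (traced before the fio summary) re-binds the existing
# /dev/ublkb1 to the new target via UBLK_CMD_GET_DEV_INFO ->
# START_USER_RECOVERY -> END_USER_RECOVERY, and the fio job rides out the
# outage: the summary above reports ~105 MiB/s each way across the full
# 60 s window. Quick sanity check of those numbers:
echo $(( 6306 / 60 ))       # 105 -> ~105 MiB/s READ bandwidth, as reported
echo $(( 1614391 / 60 ))    # 26906 -> read IOPS, matching "IOPS=26.9k"
# --- end annotation ---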
completed 00:18:19.784 [2024-11-20 18:27:31.248359] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:18:19.784 [2024-11-20 18:27:31.256119] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:19.784 [2024-11-20 18:27:31.256213] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:18:19.784 [2024-11-20 18:27:31.256221] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:18:19.784 18:27:31 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.784 18:27:31 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:18:19.784 18:27:31 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.784 18:27:31 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:19.784 [2024-11-20 18:27:31.272189] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:19.784 [2024-11-20 18:27:31.280111] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:19.784 [2024-11-20 18:27:31.280141] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:19.784 18:27:31 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.784 18:27:31 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:18:19.784 18:27:31 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:18:19.784 18:27:31 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 74119 00:18:19.784 18:27:31 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 74119 ']' 00:18:19.784 18:27:31 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 74119 00:18:19.784 18:27:31 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:18:19.784 18:27:31 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:19.784 18:27:31 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74119 00:18:19.784 18:27:31 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:19.784 18:27:31 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:19.784 killing process with pid 74119 00:18:19.784 18:27:31 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74119' 00:18:19.784 18:27:31 ublk_recovery -- common/autotest_common.sh@973 -- # kill 74119 00:18:19.784 18:27:31 ublk_recovery -- common/autotest_common.sh@978 -- # wait 74119 00:18:19.784 [2024-11-20 18:27:32.332479] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:19.784 [2024-11-20 18:27:32.332527] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:19.784 00:18:19.784 real 1m4.270s 00:18:19.784 user 1m43.464s 00:18:19.784 sys 0m34.401s 00:18:19.784 18:27:33 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:19.784 18:27:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:19.784 ************************************ 00:18:19.784 END TEST ublk_recovery 00:18:19.784 ************************************ 00:18:19.784 18:27:33 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:18:19.784 18:27:33 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:18:19.784 18:27:33 -- spdk/autotest.sh@260 -- # timing_exit lib 00:18:19.784 18:27:33 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:19.784 18:27:33 -- common/autotest_common.sh@10 -- # set +x 00:18:19.784 18:27:33 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:18:19.784 18:27:33 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:18:19.784 18:27:33 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 
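# --- Annotation (editor's note, not part of the captured log) ---
# autotest.sh gates each optional suite on its SPDK_TEST_* flag and
# dispatches through run_test, which times the suite and prints the
# START/END banners seen throughout this log. Sketch of the branch taken
# here (the flag name and wrapper form are inferred from the trace):
if [ "$SPDK_TEST_FTL" -eq 1 ]; then
    run_test ftl "$rootdir/test/ftl/ftl.sh"
fi
# --- end annotation ---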
']' 00:18:19.784 18:27:33 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:18:19.784 18:27:33 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:18:19.784 18:27:33 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:18:19.784 18:27:33 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:18:19.784 18:27:33 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:18:19.784 18:27:33 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:18:19.784 18:27:33 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:18:19.784 18:27:33 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:19.784 18:27:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:19.784 18:27:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:19.784 18:27:33 -- common/autotest_common.sh@10 -- # set +x 00:18:19.784 ************************************ 00:18:19.784 START TEST ftl 00:18:19.784 ************************************ 00:18:19.784 18:27:33 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:19.784 * Looking for test storage... 00:18:19.784 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:19.784 18:27:33 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:19.784 18:27:33 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:18:19.784 18:27:33 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:19.784 18:27:33 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:19.784 18:27:33 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:19.784 18:27:33 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:19.784 18:27:33 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:19.784 18:27:33 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:18:19.784 18:27:33 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:18:19.784 18:27:33 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:18:19.784 18:27:33 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:18:19.784 18:27:33 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:18:19.784 18:27:33 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:18:19.784 18:27:33 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:18:19.784 18:27:33 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:19.784 18:27:33 ftl -- scripts/common.sh@344 -- # case "$op" in 00:18:19.784 18:27:33 ftl -- scripts/common.sh@345 -- # : 1 00:18:19.784 18:27:33 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:19.784 18:27:33 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:19.784 18:27:33 ftl -- scripts/common.sh@365 -- # decimal 1 00:18:19.784 18:27:33 ftl -- scripts/common.sh@353 -- # local d=1 00:18:19.784 18:27:33 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:19.784 18:27:33 ftl -- scripts/common.sh@355 -- # echo 1 00:18:19.784 18:27:33 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:18:19.784 18:27:33 ftl -- scripts/common.sh@366 -- # decimal 2 00:18:19.784 18:27:33 ftl -- scripts/common.sh@353 -- # local d=2 00:18:19.784 18:27:33 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:19.784 18:27:33 ftl -- scripts/common.sh@355 -- # echo 2 00:18:19.784 18:27:33 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:18:19.784 18:27:33 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:19.784 18:27:33 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:19.784 18:27:33 ftl -- scripts/common.sh@368 -- # return 0 00:18:19.784 18:27:33 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:19.784 18:27:33 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:19.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.784 --rc genhtml_branch_coverage=1 00:18:19.784 --rc genhtml_function_coverage=1 00:18:19.784 --rc genhtml_legend=1 00:18:19.784 --rc geninfo_all_blocks=1 00:18:19.784 --rc geninfo_unexecuted_blocks=1 00:18:19.784 00:18:19.784 ' 00:18:19.784 18:27:33 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:19.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.784 --rc genhtml_branch_coverage=1 00:18:19.784 --rc genhtml_function_coverage=1 00:18:19.784 --rc genhtml_legend=1 00:18:19.784 --rc geninfo_all_blocks=1 00:18:19.784 --rc geninfo_unexecuted_blocks=1 00:18:19.784 00:18:19.784 ' 00:18:19.784 18:27:33 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:19.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.784 --rc genhtml_branch_coverage=1 00:18:19.784 --rc genhtml_function_coverage=1 00:18:19.784 --rc genhtml_legend=1 00:18:19.784 --rc geninfo_all_blocks=1 00:18:19.784 --rc geninfo_unexecuted_blocks=1 00:18:19.784 00:18:19.784 ' 00:18:19.784 18:27:33 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:19.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.784 --rc genhtml_branch_coverage=1 00:18:19.784 --rc genhtml_function_coverage=1 00:18:19.784 --rc genhtml_legend=1 00:18:19.784 --rc geninfo_all_blocks=1 00:18:19.784 --rc geninfo_unexecuted_blocks=1 00:18:19.784 00:18:19.784 ' 00:18:19.784 18:27:33 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:19.784 18:27:33 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:19.784 18:27:33 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:19.784 18:27:33 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:19.784 18:27:33 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:18:19.784 18:27:33 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:19.784 18:27:33 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:19.785 18:27:33 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:19.785 18:27:33 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:19.785 18:27:33 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:19.785 18:27:33 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:19.785 18:27:33 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:19.785 18:27:33 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:19.785 18:27:33 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:19.785 18:27:33 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:19.785 18:27:33 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:19.785 18:27:33 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:19.785 18:27:33 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:19.785 18:27:33 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:19.785 18:27:33 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:19.785 18:27:33 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:19.785 18:27:33 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:19.785 18:27:33 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:19.785 18:27:33 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:19.785 18:27:33 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:19.785 18:27:33 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:19.785 18:27:33 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:19.785 18:27:33 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:19.785 18:27:33 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:19.785 18:27:33 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:19.785 18:27:33 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:18:19.785 18:27:33 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:18:19.785 18:27:33 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:18:19.785 18:27:33 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:18:19.785 18:27:33 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:19.785 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:19.785 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:19.785 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:19.785 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:19.785 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:19.785 18:27:33 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=74919 00:18:19.785 18:27:33 ftl -- ftl/ftl.sh@38 -- # waitforlisten 74919 00:18:19.785 18:27:33 ftl -- common/autotest_common.sh@835 -- # '[' -z 74919 ']' 00:18:19.785 18:27:33 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.785 18:27:33 ftl -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:18:19.785 18:27:33 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.785 18:27:33 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:19.785 18:27:33 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:19.785 18:27:33 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:18:19.785 [2024-11-20 18:27:33.842615] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:18:19.785 [2024-11-20 18:27:33.842736] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74919 ] 00:18:19.785 [2024-11-20 18:27:34.004783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.785 [2024-11-20 18:27:34.107211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.785 18:27:34 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:19.785 18:27:34 ftl -- common/autotest_common.sh@868 -- # return 0 00:18:19.785 18:27:34 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:18:19.785 18:27:34 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:18:19.785 18:27:35 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:19.785 18:27:35 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:18:19.785 18:27:36 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:18:19.785 18:27:36 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:19.785 18:27:36 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:19.785 18:27:36 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:18:19.785 18:27:36 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:18:19.785 18:27:36 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:18:19.785 18:27:36 ftl -- ftl/ftl.sh@50 -- # break 00:18:19.785 18:27:36 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:18:19.785 18:27:36 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:18:19.785 18:27:36 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:19.785 18:27:36 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:19.785 18:27:36 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:18:19.785 18:27:36 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:18:19.785 18:27:36 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:18:19.785 18:27:36 ftl -- ftl/ftl.sh@63 -- # break 00:18:19.785 18:27:36 ftl -- ftl/ftl.sh@66 -- # killprocess 74919 00:18:19.785 18:27:36 ftl -- common/autotest_common.sh@954 -- # '[' -z 74919 ']' 00:18:19.785 18:27:36 ftl -- common/autotest_common.sh@958 -- # kill -0 74919 00:18:19.785 18:27:36 ftl -- common/autotest_common.sh@959 -- # uname 00:18:19.785 18:27:36 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:19.785 18:27:36 ftl -- 
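# --- Annotation (editor's note, not part of the captured log) ---
# ftl.sh's device selection, traced above: the NV-cache candidate is any
# non-zoned NVMe bdev with 64-byte metadata and at least 1310720 blocks
# (0000:00:10.0 here); the base device is any other qualifying bdev
# (0000:00:11.0). Sketch of the two jq filters, copied from the trace:
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
cache_disks=$($rpc bdev_get_bdevs | jq -r '.[]
    | select(.md_size == 64 and .zoned == false and .num_blocks >= 1310720)
      .driver_specific.nvme[].pci_address')
base_disks=$($rpc bdev_get_bdevs | jq -r '.[]
    | select(.driver_specific.nvme[0].pci_address != "0000:00:10.0"
             and .zoned == false and .num_blocks >= 1310720)
      .driver_specific.nvme[].pci_address')
# --- end annotation ---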
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74919 00:18:19.785 killing process with pid 74919 00:18:19.785 18:27:36 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:19.785 18:27:36 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:19.785 18:27:36 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74919' 00:18:19.785 18:27:36 ftl -- common/autotest_common.sh@973 -- # kill 74919 00:18:19.785 18:27:36 ftl -- common/autotest_common.sh@978 -- # wait 74919 00:18:19.785 18:27:38 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:18:19.785 18:27:38 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:19.785 18:27:38 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:19.785 18:27:38 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:19.785 18:27:38 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:19.785 ************************************ 00:18:19.785 START TEST ftl_fio_basic 00:18:19.785 ************************************ 00:18:19.785 18:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:19.785 * Looking for test storage... 00:18:19.785 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:19.785 18:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:19.785 18:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:18:19.785 18:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:19.785 18:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:19.785 18:27:38 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:19.785 18:27:38 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:19.785 18:27:38 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:19.785 18:27:38 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:18:19.785 18:27:38 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:18:19.785 18:27:38 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:18:19.785 18:27:38 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:18:19.785 18:27:38 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:18:19.785 18:27:38 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:18:19.785 18:27:38 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:18:19.785 18:27:38 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:19.785 18:27:38 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:18:19.785 18:27:38 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:18:19.785 18:27:38 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:19.785 18:27:38 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:19.785 18:27:38 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:18:19.785 18:27:38 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:18:19.785 18:27:38 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:19.785 18:27:38 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:18:19.785 18:27:38 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:18:19.785 18:27:38 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:18:19.785 18:27:38 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:19.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.786 --rc genhtml_branch_coverage=1 00:18:19.786 --rc genhtml_function_coverage=1 00:18:19.786 --rc genhtml_legend=1 00:18:19.786 --rc geninfo_all_blocks=1 00:18:19.786 --rc geninfo_unexecuted_blocks=1 00:18:19.786 00:18:19.786 ' 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:19.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.786 --rc genhtml_branch_coverage=1 00:18:19.786 --rc genhtml_function_coverage=1 00:18:19.786 --rc genhtml_legend=1 00:18:19.786 --rc geninfo_all_blocks=1 00:18:19.786 --rc geninfo_unexecuted_blocks=1 00:18:19.786 00:18:19.786 ' 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:19.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.786 --rc genhtml_branch_coverage=1 00:18:19.786 --rc genhtml_function_coverage=1 00:18:19.786 --rc genhtml_legend=1 00:18:19.786 --rc geninfo_all_blocks=1 00:18:19.786 --rc geninfo_unexecuted_blocks=1 00:18:19.786 00:18:19.786 ' 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:19.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.786 --rc genhtml_branch_coverage=1 00:18:19.786 --rc genhtml_function_coverage=1 00:18:19.786 --rc genhtml_legend=1 00:18:19.786 --rc geninfo_all_blocks=1 00:18:19.786 --rc geninfo_unexecuted_blocks=1 00:18:19.786 00:18:19.786 ' 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=75057 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 75057 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 75057 ']' 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:19.786 18:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:19.786 [2024-11-20 18:27:38.407213] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:18:20.106 [2024-11-20 18:27:38.408075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75057 ] 00:18:20.106 [2024-11-20 18:27:38.574141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:20.106 [2024-11-20 18:27:38.701632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:20.106 [2024-11-20 18:27:38.702339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:20.106 [2024-11-20 18:27:38.702447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.687 18:27:39 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:20.687 18:27:39 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:18:20.687 18:27:39 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:20.687 18:27:39 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:18:20.687 18:27:39 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:20.687 18:27:39 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:18:20.687 18:27:39 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:18:20.687 18:27:39 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:20.944 18:27:39 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:20.944 18:27:39 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:18:20.944 18:27:39 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:20.944 18:27:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:20.944 18:27:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:20.944 18:27:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:20.944 18:27:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:20.944 18:27:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:21.202 18:27:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:21.202 { 00:18:21.202 "name": "nvme0n1", 00:18:21.202 "aliases": [ 00:18:21.202 "c3cadab2-baca-4d96-b36d-0c060ce80088" 00:18:21.202 ], 00:18:21.202 "product_name": "NVMe disk", 00:18:21.202 "block_size": 4096, 00:18:21.202 "num_blocks": 1310720, 00:18:21.202 "uuid": "c3cadab2-baca-4d96-b36d-0c060ce80088", 00:18:21.202 "numa_id": -1, 00:18:21.203 "assigned_rate_limits": { 00:18:21.203 "rw_ios_per_sec": 0, 00:18:21.203 "rw_mbytes_per_sec": 0, 00:18:21.203 "r_mbytes_per_sec": 0, 00:18:21.203 "w_mbytes_per_sec": 0 00:18:21.203 }, 00:18:21.203 "claimed": false, 00:18:21.203 "zoned": false, 00:18:21.203 "supported_io_types": { 00:18:21.203 "read": true, 00:18:21.203 "write": true, 00:18:21.203 "unmap": true, 00:18:21.203 "flush": true, 00:18:21.203 "reset": true, 00:18:21.203 "nvme_admin": true, 00:18:21.203 "nvme_io": true, 00:18:21.203 "nvme_io_md": false, 00:18:21.203 "write_zeroes": true, 00:18:21.203 "zcopy": false, 00:18:21.203 "get_zone_info": false, 00:18:21.203 "zone_management": false, 00:18:21.203 "zone_append": false, 00:18:21.203 "compare": true, 00:18:21.203 "compare_and_write": false, 00:18:21.203 "abort": true, 00:18:21.203 
"seek_hole": false, 00:18:21.203 "seek_data": false, 00:18:21.203 "copy": true, 00:18:21.203 "nvme_iov_md": false 00:18:21.203 }, 00:18:21.203 "driver_specific": { 00:18:21.203 "nvme": [ 00:18:21.203 { 00:18:21.203 "pci_address": "0000:00:11.0", 00:18:21.203 "trid": { 00:18:21.203 "trtype": "PCIe", 00:18:21.203 "traddr": "0000:00:11.0" 00:18:21.203 }, 00:18:21.203 "ctrlr_data": { 00:18:21.203 "cntlid": 0, 00:18:21.203 "vendor_id": "0x1b36", 00:18:21.203 "model_number": "QEMU NVMe Ctrl", 00:18:21.203 "serial_number": "12341", 00:18:21.203 "firmware_revision": "8.0.0", 00:18:21.203 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:21.203 "oacs": { 00:18:21.203 "security": 0, 00:18:21.203 "format": 1, 00:18:21.203 "firmware": 0, 00:18:21.203 "ns_manage": 1 00:18:21.203 }, 00:18:21.203 "multi_ctrlr": false, 00:18:21.203 "ana_reporting": false 00:18:21.203 }, 00:18:21.203 "vs": { 00:18:21.203 "nvme_version": "1.4" 00:18:21.203 }, 00:18:21.203 "ns_data": { 00:18:21.203 "id": 1, 00:18:21.203 "can_share": false 00:18:21.203 } 00:18:21.203 } 00:18:21.203 ], 00:18:21.203 "mp_policy": "active_passive" 00:18:21.203 } 00:18:21.203 } 00:18:21.203 ]' 00:18:21.203 18:27:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:21.203 18:27:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:21.203 18:27:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:21.461 18:27:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:21.461 18:27:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:21.461 18:27:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:18:21.461 18:27:39 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:18:21.461 18:27:39 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:21.461 18:27:39 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:18:21.461 18:27:39 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:21.461 18:27:39 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:21.461 18:27:40 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:18:21.461 18:27:40 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:21.719 18:27:40 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=78d35d9b-3585-4a66-bf8d-aa5cbe024f8b 00:18:21.719 18:27:40 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 78d35d9b-3585-4a66-bf8d-aa5cbe024f8b 00:18:21.977 18:27:40 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=e1358bb1-eda4-4c8a-8dd0-5b3cd23f935d 00:18:21.977 18:27:40 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e1358bb1-eda4-4c8a-8dd0-5b3cd23f935d 00:18:21.977 18:27:40 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:18:21.977 18:27:40 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:21.977 18:27:40 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=e1358bb1-eda4-4c8a-8dd0-5b3cd23f935d 00:18:21.977 18:27:40 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:18:21.977 18:27:40 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size e1358bb1-eda4-4c8a-8dd0-5b3cd23f935d 00:18:21.977 18:27:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=e1358bb1-eda4-4c8a-8dd0-5b3cd23f935d 
00:18:21.977 18:27:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:21.977 18:27:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:21.977 18:27:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:21.977 18:27:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e1358bb1-eda4-4c8a-8dd0-5b3cd23f935d 00:18:22.235 18:27:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:22.235 { 00:18:22.235 "name": "e1358bb1-eda4-4c8a-8dd0-5b3cd23f935d", 00:18:22.235 "aliases": [ 00:18:22.235 "lvs/nvme0n1p0" 00:18:22.235 ], 00:18:22.235 "product_name": "Logical Volume", 00:18:22.235 "block_size": 4096, 00:18:22.235 "num_blocks": 26476544, 00:18:22.235 "uuid": "e1358bb1-eda4-4c8a-8dd0-5b3cd23f935d", 00:18:22.235 "assigned_rate_limits": { 00:18:22.235 "rw_ios_per_sec": 0, 00:18:22.235 "rw_mbytes_per_sec": 0, 00:18:22.235 "r_mbytes_per_sec": 0, 00:18:22.235 "w_mbytes_per_sec": 0 00:18:22.235 }, 00:18:22.235 "claimed": false, 00:18:22.235 "zoned": false, 00:18:22.235 "supported_io_types": { 00:18:22.235 "read": true, 00:18:22.235 "write": true, 00:18:22.235 "unmap": true, 00:18:22.235 "flush": false, 00:18:22.235 "reset": true, 00:18:22.235 "nvme_admin": false, 00:18:22.235 "nvme_io": false, 00:18:22.235 "nvme_io_md": false, 00:18:22.235 "write_zeroes": true, 00:18:22.235 "zcopy": false, 00:18:22.235 "get_zone_info": false, 00:18:22.235 "zone_management": false, 00:18:22.235 "zone_append": false, 00:18:22.235 "compare": false, 00:18:22.235 "compare_and_write": false, 00:18:22.235 "abort": false, 00:18:22.235 "seek_hole": true, 00:18:22.235 "seek_data": true, 00:18:22.235 "copy": false, 00:18:22.235 "nvme_iov_md": false 00:18:22.235 }, 00:18:22.235 "driver_specific": { 00:18:22.235 "lvol": { 00:18:22.235 "lvol_store_uuid": "78d35d9b-3585-4a66-bf8d-aa5cbe024f8b", 00:18:22.235 "base_bdev": "nvme0n1", 00:18:22.235 "thin_provision": true, 00:18:22.235 "num_allocated_clusters": 0, 00:18:22.235 "snapshot": false, 00:18:22.235 "clone": false, 00:18:22.235 "esnap_clone": false 00:18:22.235 } 00:18:22.235 } 00:18:22.235 } 00:18:22.235 ]' 00:18:22.235 18:27:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:22.235 18:27:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:22.235 18:27:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:22.235 18:27:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:22.235 18:27:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:22.235 18:27:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:22.235 18:27:40 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:18:22.235 18:27:40 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:18:22.235 18:27:40 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:22.492 18:27:40 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:22.492 18:27:40 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:22.492 18:27:40 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size e1358bb1-eda4-4c8a-8dd0-5b3cd23f935d 00:18:22.492 18:27:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=e1358bb1-eda4-4c8a-8dd0-5b3cd23f935d 00:18:22.492 18:27:40 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:22.492 18:27:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:22.492 18:27:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:22.492 18:27:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e1358bb1-eda4-4c8a-8dd0-5b3cd23f935d 00:18:22.751 18:27:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:22.751 { 00:18:22.751 "name": "e1358bb1-eda4-4c8a-8dd0-5b3cd23f935d", 00:18:22.751 "aliases": [ 00:18:22.751 "lvs/nvme0n1p0" 00:18:22.751 ], 00:18:22.751 "product_name": "Logical Volume", 00:18:22.751 "block_size": 4096, 00:18:22.751 "num_blocks": 26476544, 00:18:22.751 "uuid": "e1358bb1-eda4-4c8a-8dd0-5b3cd23f935d", 00:18:22.751 "assigned_rate_limits": { 00:18:22.751 "rw_ios_per_sec": 0, 00:18:22.751 "rw_mbytes_per_sec": 0, 00:18:22.751 "r_mbytes_per_sec": 0, 00:18:22.751 "w_mbytes_per_sec": 0 00:18:22.751 }, 00:18:22.751 "claimed": false, 00:18:22.751 "zoned": false, 00:18:22.751 "supported_io_types": { 00:18:22.751 "read": true, 00:18:22.751 "write": true, 00:18:22.751 "unmap": true, 00:18:22.751 "flush": false, 00:18:22.751 "reset": true, 00:18:22.751 "nvme_admin": false, 00:18:22.751 "nvme_io": false, 00:18:22.751 "nvme_io_md": false, 00:18:22.751 "write_zeroes": true, 00:18:22.751 "zcopy": false, 00:18:22.751 "get_zone_info": false, 00:18:22.751 "zone_management": false, 00:18:22.751 "zone_append": false, 00:18:22.751 "compare": false, 00:18:22.751 "compare_and_write": false, 00:18:22.751 "abort": false, 00:18:22.751 "seek_hole": true, 00:18:22.751 "seek_data": true, 00:18:22.751 "copy": false, 00:18:22.751 "nvme_iov_md": false 00:18:22.751 }, 00:18:22.751 "driver_specific": { 00:18:22.751 "lvol": { 00:18:22.751 "lvol_store_uuid": "78d35d9b-3585-4a66-bf8d-aa5cbe024f8b", 00:18:22.751 "base_bdev": "nvme0n1", 00:18:22.751 "thin_provision": true, 00:18:22.751 "num_allocated_clusters": 0, 00:18:22.751 "snapshot": false, 00:18:22.751 "clone": false, 00:18:22.751 "esnap_clone": false 00:18:22.751 } 00:18:22.751 } 00:18:22.751 } 00:18:22.751 ]' 00:18:22.751 18:27:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:22.751 18:27:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:22.751 18:27:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:22.751 18:27:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:22.751 18:27:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:22.751 18:27:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:22.751 18:27:41 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:18:22.751 18:27:41 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:23.009 18:27:41 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:18:23.009 18:27:41 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:18:23.009 18:27:41 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:18:23.009 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:18:23.009 18:27:41 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size e1358bb1-eda4-4c8a-8dd0-5b3cd23f935d 00:18:23.009 18:27:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=e1358bb1-eda4-4c8a-8dd0-5b3cd23f935d 00:18:23.009 18:27:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:23.009 18:27:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:23.009 18:27:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:23.009 18:27:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e1358bb1-eda4-4c8a-8dd0-5b3cd23f935d 00:18:23.267 18:27:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:23.267 { 00:18:23.267 "name": "e1358bb1-eda4-4c8a-8dd0-5b3cd23f935d", 00:18:23.267 "aliases": [ 00:18:23.267 "lvs/nvme0n1p0" 00:18:23.267 ], 00:18:23.267 "product_name": "Logical Volume", 00:18:23.267 "block_size": 4096, 00:18:23.267 "num_blocks": 26476544, 00:18:23.268 "uuid": "e1358bb1-eda4-4c8a-8dd0-5b3cd23f935d", 00:18:23.268 "assigned_rate_limits": { 00:18:23.268 "rw_ios_per_sec": 0, 00:18:23.268 "rw_mbytes_per_sec": 0, 00:18:23.268 "r_mbytes_per_sec": 0, 00:18:23.268 "w_mbytes_per_sec": 0 00:18:23.268 }, 00:18:23.268 "claimed": false, 00:18:23.268 "zoned": false, 00:18:23.268 "supported_io_types": { 00:18:23.268 "read": true, 00:18:23.268 "write": true, 00:18:23.268 "unmap": true, 00:18:23.268 "flush": false, 00:18:23.268 "reset": true, 00:18:23.268 "nvme_admin": false, 00:18:23.268 "nvme_io": false, 00:18:23.268 "nvme_io_md": false, 00:18:23.268 "write_zeroes": true, 00:18:23.268 "zcopy": false, 00:18:23.268 "get_zone_info": false, 00:18:23.268 "zone_management": false, 00:18:23.268 "zone_append": false, 00:18:23.268 "compare": false, 00:18:23.268 "compare_and_write": false, 00:18:23.268 "abort": false, 00:18:23.268 "seek_hole": true, 00:18:23.268 "seek_data": true, 00:18:23.268 "copy": false, 00:18:23.268 "nvme_iov_md": false 00:18:23.268 }, 00:18:23.268 "driver_specific": { 00:18:23.268 "lvol": { 00:18:23.268 "lvol_store_uuid": "78d35d9b-3585-4a66-bf8d-aa5cbe024f8b", 00:18:23.268 "base_bdev": "nvme0n1", 00:18:23.268 "thin_provision": true, 00:18:23.268 "num_allocated_clusters": 0, 00:18:23.268 "snapshot": false, 00:18:23.268 "clone": false, 00:18:23.268 "esnap_clone": false 00:18:23.268 } 00:18:23.268 } 00:18:23.268 } 00:18:23.268 ]' 00:18:23.268 18:27:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:23.268 18:27:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:23.268 18:27:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:23.268 18:27:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:23.268 18:27:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:23.268 18:27:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:23.268 18:27:41 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:18:23.268 18:27:41 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:18:23.268 18:27:41 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e1358bb1-eda4-4c8a-8dd0-5b3cd23f935d -c nvc0n1p0 --l2p_dram_limit 60 00:18:23.268 [2024-11-20 18:27:41.883832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.268 [2024-11-20 18:27:41.883866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:23.268 [2024-11-20 18:27:41.883879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:23.268 
[2024-11-20 18:27:41.883886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.268 [2024-11-20 18:27:41.883933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.268 [2024-11-20 18:27:41.883943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:23.268 [2024-11-20 18:27:41.883951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:18:23.268 [2024-11-20 18:27:41.883957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.268 [2024-11-20 18:27:41.883986] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:23.268 [2024-11-20 18:27:41.884556] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:23.268 [2024-11-20 18:27:41.884583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.268 [2024-11-20 18:27:41.884590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:23.268 [2024-11-20 18:27:41.884598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.603 ms 00:18:23.268 [2024-11-20 18:27:41.884606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.268 [2024-11-20 18:27:41.884667] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 67443721-24d1-4037-960f-7a0df8d4deb7 00:18:23.268 [2024-11-20 18:27:41.885747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.268 [2024-11-20 18:27:41.885771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:23.268 [2024-11-20 18:27:41.885780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:23.268 [2024-11-20 18:27:41.885787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.268 [2024-11-20 18:27:41.890985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.268 [2024-11-20 18:27:41.891070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:23.268 [2024-11-20 18:27:41.891120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.158 ms 00:18:23.268 [2024-11-20 18:27:41.891153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.268 [2024-11-20 18:27:41.891245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.268 [2024-11-20 18:27:41.891270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:23.268 [2024-11-20 18:27:41.891288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:18:23.268 [2024-11-20 18:27:41.891307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.268 [2024-11-20 18:27:41.891407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.268 [2024-11-20 18:27:41.891436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:23.268 [2024-11-20 18:27:41.891458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:23.268 [2024-11-20 18:27:41.891476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.268 [2024-11-20 18:27:41.891508] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:23.268 [2024-11-20 18:27:41.894498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.268 [2024-11-20 
18:27:41.894576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:23.268 [2024-11-20 18:27:41.894621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.993 ms 00:18:23.268 [2024-11-20 18:27:41.894641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.268 [2024-11-20 18:27:41.894683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.268 [2024-11-20 18:27:41.894704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:23.268 [2024-11-20 18:27:41.894722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:23.268 [2024-11-20 18:27:41.894765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.268 [2024-11-20 18:27:41.894802] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:23.268 [2024-11-20 18:27:41.894930] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:23.527 [2024-11-20 18:27:41.895000] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:23.527 [2024-11-20 18:27:41.895056] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:23.527 [2024-11-20 18:27:41.895088] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:23.527 [2024-11-20 18:27:41.895127] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:23.527 [2024-11-20 18:27:41.895160] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:23.527 [2024-11-20 18:27:41.895176] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:23.527 [2024-11-20 18:27:41.895194] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:23.527 [2024-11-20 18:27:41.895209] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:23.527 [2024-11-20 18:27:41.895310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.527 [2024-11-20 18:27:41.895331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:23.528 [2024-11-20 18:27:41.895350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.509 ms 00:18:23.528 [2024-11-20 18:27:41.895415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.528 [2024-11-20 18:27:41.895500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.528 [2024-11-20 18:27:41.895522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:23.528 [2024-11-20 18:27:41.895562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:23.528 [2024-11-20 18:27:41.895580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.528 [2024-11-20 18:27:41.895693] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:23.528 [2024-11-20 18:27:41.895718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:23.528 [2024-11-20 18:27:41.895738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:23.528 [2024-11-20 18:27:41.895780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:23.528 [2024-11-20 18:27:41.895801] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:18:23.528 [2024-11-20 18:27:41.895844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:23.528 [2024-11-20 18:27:41.895863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:23.528 [2024-11-20 18:27:41.895882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:23.528 [2024-11-20 18:27:41.895899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:23.528 [2024-11-20 18:27:41.895915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:23.528 [2024-11-20 18:27:41.895932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:23.528 [2024-11-20 18:27:41.895981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:23.528 [2024-11-20 18:27:41.896000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:23.528 [2024-11-20 18:27:41.896015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:23.528 [2024-11-20 18:27:41.896031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:23.528 [2024-11-20 18:27:41.896047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:23.528 [2024-11-20 18:27:41.896065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:23.528 [2024-11-20 18:27:41.896135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:23.528 [2024-11-20 18:27:41.896158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:23.528 [2024-11-20 18:27:41.896174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:23.528 [2024-11-20 18:27:41.896191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:23.528 [2024-11-20 18:27:41.896206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:23.528 [2024-11-20 18:27:41.896257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:23.528 [2024-11-20 18:27:41.896275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:23.528 [2024-11-20 18:27:41.896291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:23.528 [2024-11-20 18:27:41.896306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:23.528 [2024-11-20 18:27:41.896322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:23.528 [2024-11-20 18:27:41.896337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:23.528 [2024-11-20 18:27:41.896378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:23.528 [2024-11-20 18:27:41.896396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:23.528 [2024-11-20 18:27:41.896413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:23.528 [2024-11-20 18:27:41.896427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:23.528 [2024-11-20 18:27:41.896447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:23.528 [2024-11-20 18:27:41.896461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:23.528 [2024-11-20 18:27:41.896485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:23.528 [2024-11-20 18:27:41.896539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:23.528 [2024-11-20 18:27:41.896559] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:23.528 [2024-11-20 18:27:41.896575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:23.528 [2024-11-20 18:27:41.896592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:23.528 [2024-11-20 18:27:41.896607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:23.528 [2024-11-20 18:27:41.896622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:23.528 [2024-11-20 18:27:41.896637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:23.528 [2024-11-20 18:27:41.896704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:23.528 [2024-11-20 18:27:41.896722] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:23.528 [2024-11-20 18:27:41.896743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:23.528 [2024-11-20 18:27:41.896759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:23.528 [2024-11-20 18:27:41.896777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:23.528 [2024-11-20 18:27:41.896823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:23.528 [2024-11-20 18:27:41.896843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:23.528 [2024-11-20 18:27:41.896859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:23.528 [2024-11-20 18:27:41.896875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:23.528 [2024-11-20 18:27:41.896890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:23.528 [2024-11-20 18:27:41.896907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:23.528 [2024-11-20 18:27:41.896958] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:23.528 [2024-11-20 18:27:41.896992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:23.528 [2024-11-20 18:27:41.897020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:23.528 [2024-11-20 18:27:41.897045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:23.528 [2024-11-20 18:27:41.897113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:23.528 [2024-11-20 18:27:41.897146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:23.528 [2024-11-20 18:27:41.897173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:23.528 [2024-11-20 18:27:41.897236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:23.528 [2024-11-20 18:27:41.897265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:23.528 [2024-11-20 18:27:41.897293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:18:23.528 [2024-11-20 18:27:41.897321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:23.528 [2024-11-20 18:27:41.897389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:23.528 [2024-11-20 18:27:41.897418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:23.528 [2024-11-20 18:27:41.897445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:23.528 [2024-11-20 18:27:41.897501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:23.528 [2024-11-20 18:27:41.897533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:23.528 [2024-11-20 18:27:41.897561] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:23.528 [2024-11-20 18:27:41.897587] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:23.528 [2024-11-20 18:27:41.897629] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:23.528 [2024-11-20 18:27:41.897658] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:23.528 [2024-11-20 18:27:41.897686] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:23.528 [2024-11-20 18:27:41.897711] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:23.528 [2024-11-20 18:27:41.897767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.528 [2024-11-20 18:27:41.897787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:23.528 [2024-11-20 18:27:41.897803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.128 ms 00:18:23.528 [2024-11-20 18:27:41.897821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.528 [2024-11-20 18:27:41.897884] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
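[annotation] The ~2 s gap that follows is the one-time NV cache scrub announced above; apart from that pause, the entire bdev stack under ftl0 was assembled by RPCs already shown earlier in this log. Condensed into one reproducible sequence (the UUID placeholders stand for the per-run values printed by the create calls; $RPC abbreviates scripts/rpc.py as in the sketch above):

  $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base data device
  $RPC bdev_lvol_create_lvstore nvme0n1 lvs                           # prints the lvstore UUID
  $RPC bdev_lvol_create nvme0n1p0 103424 -t -u <lvs-uuid>             # thin 103424 MiB lvol, the FTL base
  $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # NV cache / write-buffer device
  $RPC bdev_split_create nvc0n1 -s 5171 1                             # one 5171 MiB slice -> nvc0n1p0
  $RPC -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> -c nvc0n1p0 --l2p_dram_limit 60

The layout dump above is self-consistent with those sizes: 20971520 L2P entries at 4 B each account for the 80.00 MiB l2p region, and the --l2p_dram_limit 60 cap is what later produces the "l2p maximum resident size is: 59 (of 60) MiB" notice.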
00:18:23.528 [2024-11-20 18:27:41.897917] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:25.429 [2024-11-20 18:27:44.055867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.429 [2024-11-20 18:27:44.056051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:25.429 [2024-11-20 18:27:44.056166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2157.974 ms 00:18:25.429 [2024-11-20 18:27:44.056190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.687 [2024-11-20 18:27:44.079647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.687 [2024-11-20 18:27:44.079798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:25.687 [2024-11-20 18:27:44.079869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.268 ms 00:18:25.687 [2024-11-20 18:27:44.079893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.688 [2024-11-20 18:27:44.080019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.688 [2024-11-20 18:27:44.080087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:25.688 [2024-11-20 18:27:44.080111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:18:25.688 [2024-11-20 18:27:44.080123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.688 [2024-11-20 18:27:44.126313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.688 [2024-11-20 18:27:44.126413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:25.688 [2024-11-20 18:27:44.126460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.145 ms 00:18:25.688 [2024-11-20 18:27:44.126498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.688 [2024-11-20 18:27:44.126609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.688 [2024-11-20 18:27:44.126653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:25.688 [2024-11-20 18:27:44.126688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:25.688 [2024-11-20 18:27:44.126725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.688 [2024-11-20 18:27:44.127522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.688 [2024-11-20 18:27:44.127597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:25.688 [2024-11-20 18:27:44.127633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.633 ms 00:18:25.688 [2024-11-20 18:27:44.127675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.688 [2024-11-20 18:27:44.128024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.688 [2024-11-20 18:27:44.128064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:25.688 [2024-11-20 18:27:44.128121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:18:25.688 [2024-11-20 18:27:44.128163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.688 [2024-11-20 18:27:44.142559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.688 [2024-11-20 18:27:44.142611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:25.688 [2024-11-20 
18:27:44.142633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.325 ms 00:18:25.688 [2024-11-20 18:27:44.142651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.688 [2024-11-20 18:27:44.152831] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:25.688 [2024-11-20 18:27:44.168416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.688 [2024-11-20 18:27:44.168524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:25.688 [2024-11-20 18:27:44.168570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.613 ms 00:18:25.688 [2024-11-20 18:27:44.168607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.688 [2024-11-20 18:27:44.211716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.688 [2024-11-20 18:27:44.211819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:25.688 [2024-11-20 18:27:44.211873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.067 ms 00:18:25.688 [2024-11-20 18:27:44.211893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.688 [2024-11-20 18:27:44.212059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.688 [2024-11-20 18:27:44.212086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:25.688 [2024-11-20 18:27:44.212151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:18:25.688 [2024-11-20 18:27:44.212174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.688 [2024-11-20 18:27:44.230685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.688 [2024-11-20 18:27:44.230776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:25.688 [2024-11-20 18:27:44.230821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.451 ms 00:18:25.688 [2024-11-20 18:27:44.230841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.688 [2024-11-20 18:27:44.248217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.688 [2024-11-20 18:27:44.248304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:25.688 [2024-11-20 18:27:44.248360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.336 ms 00:18:25.688 [2024-11-20 18:27:44.248376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.688 [2024-11-20 18:27:44.248857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.688 [2024-11-20 18:27:44.248926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:25.688 [2024-11-20 18:27:44.249180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.441 ms 00:18:25.688 [2024-11-20 18:27:44.249218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.688 [2024-11-20 18:27:44.305220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.688 [2024-11-20 18:27:44.305318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:25.688 [2024-11-20 18:27:44.305371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.896 ms 00:18:25.688 [2024-11-20 18:27:44.305395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.946 [2024-11-20 
18:27:44.324252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.946 [2024-11-20 18:27:44.324345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:25.946 [2024-11-20 18:27:44.324390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.773 ms 00:18:25.946 [2024-11-20 18:27:44.324408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.946 [2024-11-20 18:27:44.342267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.946 [2024-11-20 18:27:44.342359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:25.946 [2024-11-20 18:27:44.342419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.814 ms 00:18:25.946 [2024-11-20 18:27:44.342437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.946 [2024-11-20 18:27:44.360352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.946 [2024-11-20 18:27:44.360445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:25.946 [2024-11-20 18:27:44.360497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.874 ms 00:18:25.946 [2024-11-20 18:27:44.360515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.946 [2024-11-20 18:27:44.360558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.946 [2024-11-20 18:27:44.360579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:25.946 [2024-11-20 18:27:44.360599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:25.946 [2024-11-20 18:27:44.360618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.946 [2024-11-20 18:27:44.360703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.946 [2024-11-20 18:27:44.360771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:25.946 [2024-11-20 18:27:44.360796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:18:25.946 [2024-11-20 18:27:44.360812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.946 [2024-11-20 18:27:44.361852] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2477.642 ms, result 0 00:18:25.946 { 00:18:25.946 "name": "ftl0", 00:18:25.946 "uuid": "67443721-24d1-4037-960f-7a0df8d4deb7" 00:18:25.946 } 00:18:25.946 18:27:44 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:18:25.946 18:27:44 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:18:25.946 18:27:44 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:25.946 18:27:44 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:18:25.946 18:27:44 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:25.946 18:27:44 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:25.946 18:27:44 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:26.205 18:27:44 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:26.205 [ 00:18:26.205 { 00:18:26.205 "name": "ftl0", 00:18:26.205 "aliases": [ 00:18:26.205 "67443721-24d1-4037-960f-7a0df8d4deb7" 00:18:26.205 ], 00:18:26.205 "product_name": "FTL 
disk", 00:18:26.205 "block_size": 4096, 00:18:26.205 "num_blocks": 20971520, 00:18:26.205 "uuid": "67443721-24d1-4037-960f-7a0df8d4deb7", 00:18:26.205 "assigned_rate_limits": { 00:18:26.205 "rw_ios_per_sec": 0, 00:18:26.205 "rw_mbytes_per_sec": 0, 00:18:26.205 "r_mbytes_per_sec": 0, 00:18:26.205 "w_mbytes_per_sec": 0 00:18:26.205 }, 00:18:26.205 "claimed": false, 00:18:26.205 "zoned": false, 00:18:26.205 "supported_io_types": { 00:18:26.205 "read": true, 00:18:26.205 "write": true, 00:18:26.205 "unmap": true, 00:18:26.205 "flush": true, 00:18:26.205 "reset": false, 00:18:26.205 "nvme_admin": false, 00:18:26.205 "nvme_io": false, 00:18:26.205 "nvme_io_md": false, 00:18:26.205 "write_zeroes": true, 00:18:26.205 "zcopy": false, 00:18:26.205 "get_zone_info": false, 00:18:26.205 "zone_management": false, 00:18:26.205 "zone_append": false, 00:18:26.205 "compare": false, 00:18:26.205 "compare_and_write": false, 00:18:26.205 "abort": false, 00:18:26.205 "seek_hole": false, 00:18:26.205 "seek_data": false, 00:18:26.205 "copy": false, 00:18:26.205 "nvme_iov_md": false 00:18:26.205 }, 00:18:26.205 "driver_specific": { 00:18:26.205 "ftl": { 00:18:26.205 "base_bdev": "e1358bb1-eda4-4c8a-8dd0-5b3cd23f935d", 00:18:26.205 "cache": "nvc0n1p0" 00:18:26.205 } 00:18:26.205 } 00:18:26.205 } 00:18:26.205 ] 00:18:26.205 18:27:44 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:18:26.205 18:27:44 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:18:26.205 18:27:44 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:26.463 18:27:44 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:18:26.463 18:27:44 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:26.723 [2024-11-20 18:27:45.182396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.723 [2024-11-20 18:27:45.182430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:26.723 [2024-11-20 18:27:45.182441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:26.723 [2024-11-20 18:27:45.182450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.723 [2024-11-20 18:27:45.182477] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:26.723 [2024-11-20 18:27:45.184745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.723 [2024-11-20 18:27:45.184769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:26.723 [2024-11-20 18:27:45.184780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.251 ms 00:18:26.723 [2024-11-20 18:27:45.184787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.723 [2024-11-20 18:27:45.185221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.723 [2024-11-20 18:27:45.185234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:26.723 [2024-11-20 18:27:45.185243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:18:26.723 [2024-11-20 18:27:45.185249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.723 [2024-11-20 18:27:45.187685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.723 [2024-11-20 18:27:45.187702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:26.723 
[2024-11-20 18:27:45.187711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.418 ms 00:18:26.723 [2024-11-20 18:27:45.187718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.723 [2024-11-20 18:27:45.192383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.723 [2024-11-20 18:27:45.192404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:26.723 [2024-11-20 18:27:45.192414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.642 ms 00:18:26.723 [2024-11-20 18:27:45.192421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.723 [2024-11-20 18:27:45.210705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.723 [2024-11-20 18:27:45.210835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:26.723 [2024-11-20 18:27:45.210852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.229 ms 00:18:26.723 [2024-11-20 18:27:45.210858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.723 [2024-11-20 18:27:45.222946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.723 [2024-11-20 18:27:45.222972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:26.723 [2024-11-20 18:27:45.222985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.042 ms 00:18:26.723 [2024-11-20 18:27:45.222993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.723 [2024-11-20 18:27:45.223177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.723 [2024-11-20 18:27:45.223189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:26.723 [2024-11-20 18:27:45.223199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:18:26.723 [2024-11-20 18:27:45.223205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.723 [2024-11-20 18:27:45.241041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.723 [2024-11-20 18:27:45.241065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:26.723 [2024-11-20 18:27:45.241076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.808 ms 00:18:26.723 [2024-11-20 18:27:45.241082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.723 [2024-11-20 18:27:45.258993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.723 [2024-11-20 18:27:45.259017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:26.723 [2024-11-20 18:27:45.259027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.860 ms 00:18:26.723 [2024-11-20 18:27:45.259033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.723 [2024-11-20 18:27:45.276228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.723 [2024-11-20 18:27:45.276324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:26.723 [2024-11-20 18:27:45.276340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.159 ms 00:18:26.723 [2024-11-20 18:27:45.276346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.723 [2024-11-20 18:27:45.293516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.723 [2024-11-20 18:27:45.293539] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:26.723 [2024-11-20 18:27:45.293549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.091 ms 00:18:26.723 [2024-11-20 18:27:45.293554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.723 [2024-11-20 18:27:45.293589] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:26.723 [2024-11-20 18:27:45.293601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:26.723 [2024-11-20 18:27:45.293610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:26.723 [2024-11-20 18:27:45.293617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:26.723 [2024-11-20 18:27:45.293625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:26.723 [2024-11-20 18:27:45.293630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:26.723 [2024-11-20 18:27:45.293638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:26.723 [2024-11-20 18:27:45.293643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:26.723 [2024-11-20 18:27:45.293653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:26.723 [2024-11-20 18:27:45.293659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:26.723 [2024-11-20 18:27:45.293666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:26.723 [2024-11-20 18:27:45.293672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:26.723 [2024-11-20 18:27:45.293679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:26.723 [2024-11-20 18:27:45.293685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 
[2024-11-20 18:27:45.293750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:18:26.724 [2024-11-20 18:27:45.293920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.293999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.294005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.294012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.294017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.294025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.294030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.294037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.294043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.294049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.294055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.294062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.294067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.294078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.294083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.294090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.294105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.294115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.294120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.294128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.294134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.294142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.294148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.294156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.294162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.294181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.294186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.294194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.294199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.294208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.294213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.294220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.294226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.294233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.294239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.294246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.294251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.294258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.294264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.294273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.294278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.294285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:26.724 [2024-11-20 18:27:45.294297] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:26.724 [2024-11-20 18:27:45.294304] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 67443721-24d1-4037-960f-7a0df8d4deb7 00:18:26.724 [2024-11-20 18:27:45.294312] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:26.725 [2024-11-20 18:27:45.294320] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:26.725 [2024-11-20 18:27:45.294325] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:26.725 [2024-11-20 18:27:45.294334] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:26.725 [2024-11-20 18:27:45.294340] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:26.725 [2024-11-20 18:27:45.294348] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:26.725 [2024-11-20 18:27:45.294354] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:26.725 [2024-11-20 18:27:45.294362] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:26.725 [2024-11-20 18:27:45.294367] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:26.725 [2024-11-20 18:27:45.294375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.725 [2024-11-20 18:27:45.294381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:26.725 [2024-11-20 18:27:45.294389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.787 ms 00:18:26.725 [2024-11-20 18:27:45.294394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.725 [2024-11-20 18:27:45.304342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.725 [2024-11-20 18:27:45.304367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:26.725 [2024-11-20 18:27:45.304377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.918 ms 00:18:26.725 [2024-11-20 18:27:45.304384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.725 [2024-11-20 18:27:45.304697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.725 [2024-11-20 18:27:45.304708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:26.725 [2024-11-20 18:27:45.304716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:18:26.725 [2024-11-20 18:27:45.304723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.725 [2024-11-20 18:27:45.341136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.725 [2024-11-20 18:27:45.341163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:26.725 [2024-11-20 18:27:45.341174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.725 [2024-11-20 18:27:45.341181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:26.725 [2024-11-20 18:27:45.341242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.725 [2024-11-20 18:27:45.341249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:26.725 [2024-11-20 18:27:45.341257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.725 [2024-11-20 18:27:45.341263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.725 [2024-11-20 18:27:45.341334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.725 [2024-11-20 18:27:45.341345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:26.725 [2024-11-20 18:27:45.341356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.725 [2024-11-20 18:27:45.341362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.725 [2024-11-20 18:27:45.341383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.725 [2024-11-20 18:27:45.341390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:26.725 [2024-11-20 18:27:45.341398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.725 [2024-11-20 18:27:45.341404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.983 [2024-11-20 18:27:45.408277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.983 [2024-11-20 18:27:45.408315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:26.983 [2024-11-20 18:27:45.408327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.983 [2024-11-20 18:27:45.408334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.983 [2024-11-20 18:27:45.459337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.983 [2024-11-20 18:27:45.459512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:26.983 [2024-11-20 18:27:45.459529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.983 [2024-11-20 18:27:45.459536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.983 [2024-11-20 18:27:45.459628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.983 [2024-11-20 18:27:45.459637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:26.983 [2024-11-20 18:27:45.459645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.983 [2024-11-20 18:27:45.459654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.983 [2024-11-20 18:27:45.459712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.983 [2024-11-20 18:27:45.459719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:26.983 [2024-11-20 18:27:45.459728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.983 [2024-11-20 18:27:45.459733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.983 [2024-11-20 18:27:45.459826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.983 [2024-11-20 18:27:45.459838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:26.983 [2024-11-20 18:27:45.459846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.983 [2024-11-20 
18:27:45.459852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.983 [2024-11-20 18:27:45.459899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.983 [2024-11-20 18:27:45.459907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:26.983 [2024-11-20 18:27:45.459915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.983 [2024-11-20 18:27:45.459921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.983 [2024-11-20 18:27:45.459965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.983 [2024-11-20 18:27:45.459972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:26.983 [2024-11-20 18:27:45.459980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.983 [2024-11-20 18:27:45.459986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.983 [2024-11-20 18:27:45.460036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:26.983 [2024-11-20 18:27:45.460043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:26.983 [2024-11-20 18:27:45.460051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:26.983 [2024-11-20 18:27:45.460057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.983 [2024-11-20 18:27:45.460224] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 277.803 ms, result 0 00:18:26.983 true 00:18:26.983 18:27:45 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 75057 00:18:26.983 18:27:45 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 75057 ']' 00:18:26.983 18:27:45 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 75057 00:18:26.983 18:27:45 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:18:26.983 18:27:45 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:26.983 18:27:45 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75057 00:18:26.983 18:27:45 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:26.983 killing process with pid 75057 00:18:26.983 18:27:45 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:26.983 18:27:45 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75057' 00:18:26.983 18:27:45 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 75057 00:18:26.983 18:27:45 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 75057 00:18:33.543 18:27:51 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:18:33.543 18:27:51 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:33.543 18:27:51 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:18:33.543 18:27:51 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:33.543 18:27:51 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:33.543 18:27:51 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:33.543 18:27:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:33.543 18:27:51 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:33.543 18:27:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:33.543 18:27:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:33.543 18:27:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:33.543 18:27:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:33.543 18:27:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:33.543 18:27:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:33.543 18:27:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:33.543 18:27:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:33.543 18:27:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:33.543 18:27:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:33.543 18:27:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:33.543 18:27:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:33.543 18:27:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:33.543 18:27:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:33.543 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:18:33.543 fio-3.35 00:18:33.543 Starting 1 thread 00:18:37.730 00:18:37.730 test: (groupid=0, jobs=1): err= 0: pid=75230: Wed Nov 20 18:27:55 2024 00:18:37.730 read: IOPS=1371, BW=91.1MiB/s (95.5MB/s)(255MiB/2795msec) 00:18:37.730 slat (nsec): min=2977, max=16493, avg=3816.18, stdev=1556.50 00:18:37.730 clat (usec): min=253, max=864, avg=331.93, stdev=39.43 00:18:37.730 lat (usec): min=256, max=874, avg=335.75, stdev=40.22 00:18:37.730 clat percentiles (usec): 00:18:37.730 | 1.00th=[ 293], 5.00th=[ 314], 10.00th=[ 314], 20.00th=[ 318], 00:18:37.730 | 30.00th=[ 322], 40.00th=[ 322], 50.00th=[ 322], 60.00th=[ 322], 00:18:37.730 | 70.00th=[ 326], 80.00th=[ 330], 90.00th=[ 347], 95.00th=[ 424], 00:18:37.730 | 99.00th=[ 502], 99.50th=[ 529], 99.90th=[ 734], 99.95th=[ 807], 00:18:37.730 | 99.99th=[ 865] 00:18:37.730 write: IOPS=1381, BW=91.7MiB/s (96.2MB/s)(256MiB/2792msec); 0 zone resets 00:18:37.730 slat (nsec): min=13722, max=61563, avg=17129.95, stdev=2538.44 00:18:37.730 clat (usec): min=295, max=1131, avg=360.36, stdev=57.34 00:18:37.730 lat (usec): min=312, max=1170, avg=377.49, stdev=58.07 00:18:37.730 clat percentiles (usec): 00:18:37.730 | 1.00th=[ 326], 5.00th=[ 338], 10.00th=[ 338], 20.00th=[ 343], 00:18:37.731 | 30.00th=[ 347], 40.00th=[ 347], 50.00th=[ 347], 60.00th=[ 351], 00:18:37.731 | 70.00th=[ 351], 80.00th=[ 359], 90.00th=[ 371], 95.00th=[ 420], 00:18:37.731 | 99.00th=[ 660], 99.50th=[ 709], 99.90th=[ 979], 99.95th=[ 1037], 00:18:37.731 | 99.99th=[ 1139] 00:18:37.731 bw ( KiB/s): min=90712, max=97240, per=100.00%, avg=94057.60, stdev=2335.08, samples=5 00:18:37.731 iops : min= 1334, max= 1430, avg=1383.20, stdev=34.34, samples=5 00:18:37.731 lat (usec) : 500=97.91%, 750=1.87%, 1000=0.18% 00:18:37.731 
lat (msec) : 2=0.04% 00:18:37.731 cpu : usr=99.32%, sys=0.04%, ctx=3, majf=0, minf=1169 00:18:37.731 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:37.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:37.731 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:37.731 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:37.731 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:37.731 00:18:37.731 Run status group 0 (all jobs): 00:18:37.731 READ: bw=91.1MiB/s (95.5MB/s), 91.1MiB/s-91.1MiB/s (95.5MB/s-95.5MB/s), io=255MiB (267MB), run=2795-2795msec 00:18:37.731 WRITE: bw=91.7MiB/s (96.2MB/s), 91.7MiB/s-91.7MiB/s (96.2MB/s-96.2MB/s), io=256MiB (269MB), run=2792-2792msec 00:18:38.302 ----------------------------------------------------- 00:18:38.302 Suppressions used: 00:18:38.302 count bytes template 00:18:38.302 1 5 /usr/src/fio/parse.c 00:18:38.302 1 8 libtcmalloc_minimal.so 00:18:38.302 1 904 libcrypto.so 00:18:38.302 ----------------------------------------------------- 00:18:38.302 00:18:38.302 18:27:56 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:18:38.302 18:27:56 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:38.302 18:27:56 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:38.302 18:27:56 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:38.302 18:27:56 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:18:38.302 18:27:56 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:38.302 18:27:56 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:38.302 18:27:56 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:38.302 18:27:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:38.302 18:27:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:38.302 18:27:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:38.302 18:27:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:38.302 18:27:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:38.302 18:27:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:38.302 18:27:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:38.302 18:27:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:38.302 18:27:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:38.302 18:27:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:38.302 18:27:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:38.302 18:27:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:38.302 18:27:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:38.302 18:27:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:38.302 18:27:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:38.302 18:27:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:38.561 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:38.561 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:38.561 fio-3.35 00:18:38.561 Starting 2 threads 00:19:05.124 00:19:05.124 first_half: (groupid=0, jobs=1): err= 0: pid=75318: Wed Nov 20 18:28:23 2024 00:19:05.124 read: IOPS=2602, BW=10.2MiB/s (10.7MB/s)(254MiB/25012msec) 00:19:05.124 slat (nsec): min=2979, max=52900, avg=4693.09, stdev=1056.24 00:19:05.124 clat (usec): min=631, max=402768, avg=33842.40, stdev=16979.34 00:19:05.124 lat (usec): min=635, max=402773, avg=33847.09, stdev=16979.39 00:19:05.124 clat percentiles (msec): 00:19:05.124 | 1.00th=[ 3], 5.00th=[ 28], 10.00th=[ 30], 20.00th=[ 31], 00:19:05.124 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:19:05.124 | 70.00th=[ 34], 80.00th=[ 37], 90.00th=[ 40], 95.00th=[ 43], 00:19:05.124 | 99.00th=[ 100], 99.50th=[ 138], 99.90th=[ 300], 99.95th=[ 334], 00:19:05.124 | 99.99th=[ 380] 00:19:05.124 write: IOPS=4568, BW=17.8MiB/s (18.7MB/s)(256MiB/14344msec); 0 zone resets 00:19:05.124 slat (usec): min=3, max=2819, avg= 6.38, stdev=13.05 00:19:05.124 clat (usec): min=397, max=103555, avg=15216.98, stdev=25632.66 00:19:05.124 lat (usec): min=404, max=103566, avg=15223.36, stdev=25632.62 00:19:05.124 clat percentiles (usec): 00:19:05.124 | 1.00th=[ 685], 5.00th=[ 816], 10.00th=[ 955], 20.00th=[ 1106], 00:19:05.124 | 30.00th=[ 1270], 40.00th=[ 1565], 50.00th=[ 2040], 60.00th=[ 4686], 00:19:05.124 | 70.00th=[ 10552], 80.00th=[ 17171], 90.00th=[ 65799], 95.00th=[ 79168], 00:19:05.124 | 99.00th=[ 92799], 99.50th=[ 94897], 99.90th=[ 99091], 99.95th=[101188], 00:19:05.124 | 99.99th=[102237] 00:19:05.124 bw ( KiB/s): min= 760, max=59496, per=75.00%, avg=20969.16, stdev=15247.50, samples=25 00:19:05.124 iops : min= 190, max=14874, avg=5242.28, stdev=3811.87, samples=25 00:19:05.124 lat (usec) : 500=0.01%, 750=1.44%, 1000=4.89% 00:19:05.124 lat (msec) : 2=18.67%, 4=5.19%, 10=5.59%, 20=7.16%, 50=48.22% 00:19:05.124 lat (msec) : 100=8.30%, 250=0.46%, 500=0.07% 00:19:05.124 cpu : usr=99.28%, sys=0.13%, ctx=53, majf=0, minf=5547 00:19:05.124 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:05.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.124 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:05.124 issued rwts: total=65091,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.124 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:05.124 second_half: (groupid=0, jobs=1): err= 0: pid=75319: Wed Nov 20 18:28:23 2024 00:19:05.124 read: IOPS=2585, BW=10.1MiB/s (10.6MB/s)(255MiB/25204msec) 00:19:05.124 slat (nsec): min=3006, max=48975, avg=4443.96, stdev=1132.81 00:19:05.124 clat (usec): min=588, max=407242, avg=33177.02, stdev=15388.14 00:19:05.124 lat (usec): min=595, max=407247, avg=33181.47, stdev=15388.20 00:19:05.124 clat percentiles (msec): 00:19:05.124 | 1.00th=[ 4], 5.00th=[ 24], 10.00th=[ 30], 20.00th=[ 31], 00:19:05.124 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:19:05.124 | 70.00th=[ 34], 80.00th=[ 37], 90.00th=[ 40], 95.00th=[ 43], 
00:19:05.124 | 99.00th=[ 86], 99.50th=[ 129], 99.90th=[ 262], 99.95th=[ 309], 00:19:05.124 | 99.99th=[ 397] 00:19:05.124 write: IOPS=3494, BW=13.7MiB/s (14.3MB/s)(256MiB/18753msec); 0 zone resets 00:19:05.124 slat (usec): min=3, max=2548, avg= 7.11, stdev=11.63 00:19:05.124 clat (usec): min=362, max=104703, avg=16218.84, stdev=25813.44 00:19:05.124 lat (usec): min=370, max=104710, avg=16225.95, stdev=25813.63 00:19:05.124 clat percentiles (usec): 00:19:05.124 | 1.00th=[ 676], 5.00th=[ 799], 10.00th=[ 947], 20.00th=[ 1172], 00:19:05.124 | 30.00th=[ 1549], 40.00th=[ 2311], 50.00th=[ 4555], 60.00th=[ 5604], 00:19:05.124 | 70.00th=[ 11338], 80.00th=[ 18744], 90.00th=[ 66323], 95.00th=[ 80217], 00:19:05.124 | 99.00th=[ 93848], 99.50th=[ 95945], 99.90th=[101188], 99.95th=[102237], 00:19:05.124 | 99.99th=[103285] 00:19:05.124 bw ( KiB/s): min= 232, max=36416, per=66.97%, avg=18724.57, stdev=10102.42, samples=28 00:19:05.124 iops : min= 58, max= 9104, avg=4681.14, stdev=2525.60, samples=28 00:19:05.124 lat (usec) : 500=0.01%, 750=1.66%, 1000=4.50% 00:19:05.124 lat (msec) : 2=12.45%, 4=5.85%, 10=9.96%, 20=8.60%, 50=48.15% 00:19:05.124 lat (msec) : 100=8.42%, 250=0.35%, 500=0.06% 00:19:05.124 cpu : usr=99.38%, sys=0.11%, ctx=36, majf=0, minf=5564 00:19:05.124 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:05.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.124 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:05.124 issued rwts: total=65160,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.124 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:05.124 00:19:05.124 Run status group 0 (all jobs): 00:19:05.124 READ: bw=20.2MiB/s (21.2MB/s), 10.1MiB/s-10.2MiB/s (10.6MB/s-10.7MB/s), io=509MiB (534MB), run=25012-25204msec 00:19:05.124 WRITE: bw=27.3MiB/s (28.6MB/s), 13.7MiB/s-17.8MiB/s (14.3MB/s-18.7MB/s), io=512MiB (537MB), run=14344-18753msec 00:19:06.509 ----------------------------------------------------- 00:19:06.509 Suppressions used: 00:19:06.509 count bytes template 00:19:06.509 2 10 /usr/src/fio/parse.c 00:19:06.509 2 192 /usr/src/fio/iolog.c 00:19:06.509 1 8 libtcmalloc_minimal.so 00:19:06.509 1 904 libcrypto.so 00:19:06.509 ----------------------------------------------------- 00:19:06.509 00:19:06.509 18:28:24 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:19:06.509 18:28:24 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:06.509 18:28:24 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:06.509 18:28:24 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:06.509 18:28:24 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:19:06.509 18:28:24 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:06.509 18:28:24 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:06.509 18:28:24 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:06.509 18:28:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:06.509 18:28:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:06.509 18:28:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:19:06.509 18:28:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:06.509 18:28:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:06.509 18:28:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:19:06.509 18:28:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:06.509 18:28:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:06.509 18:28:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:19:06.509 18:28:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:06.509 18:28:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:06.509 18:28:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:06.509 18:28:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:06.509 18:28:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:19:06.509 18:28:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:06.509 18:28:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:06.770 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:06.770 fio-3.35 00:19:06.770 Starting 1 thread 00:19:24.881 00:19:24.882 test: (groupid=0, jobs=1): err= 0: pid=75643: Wed Nov 20 18:28:40 2024 00:19:24.882 read: IOPS=7462, BW=29.2MiB/s (30.6MB/s)(255MiB/8737msec) 00:19:24.882 slat (nsec): min=3027, max=33895, avg=4790.96, stdev=1170.62 00:19:24.882 clat (usec): min=541, max=40873, avg=17143.68, stdev=2610.03 00:19:24.882 lat (usec): min=546, max=40877, avg=17148.47, stdev=2610.05 00:19:24.882 clat percentiles (usec): 00:19:24.882 | 1.00th=[14877], 5.00th=[15139], 10.00th=[15270], 20.00th=[15533], 00:19:24.882 | 30.00th=[15926], 40.00th=[16057], 50.00th=[16319], 60.00th=[16581], 00:19:24.882 | 70.00th=[16909], 80.00th=[18220], 90.00th=[20317], 95.00th=[22414], 00:19:24.882 | 99.00th=[27395], 99.50th=[30802], 99.90th=[34341], 99.95th=[36963], 00:19:24.882 | 99.99th=[40109] 00:19:24.882 write: IOPS=11.8k, BW=46.0MiB/s (48.3MB/s)(256MiB/5560msec); 0 zone resets 00:19:24.882 slat (usec): min=4, max=247, avg= 6.59, stdev= 3.14 00:19:24.882 clat (usec): min=530, max=57223, avg=10810.62, stdev=13283.78 00:19:24.882 lat (usec): min=537, max=57230, avg=10817.21, stdev=13283.79 00:19:24.882 clat percentiles (usec): 00:19:24.882 | 1.00th=[ 766], 5.00th=[ 1020], 10.00th=[ 1156], 20.00th=[ 1352], 00:19:24.882 | 30.00th=[ 1582], 40.00th=[ 2114], 50.00th=[ 6915], 60.00th=[ 8586], 00:19:24.882 | 70.00th=[10421], 80.00th=[12649], 90.00th=[37487], 95.00th=[40633], 00:19:24.882 | 99.00th=[49021], 99.50th=[49546], 99.90th=[52691], 99.95th=[53740], 00:19:24.882 | 99.99th=[56361] 00:19:24.882 bw ( KiB/s): min= 4296, max=73624, per=92.67%, avg=43690.67, stdev=15550.98, samples=12 00:19:24.882 iops : min= 1074, max=18406, avg=10922.67, stdev=3887.75, samples=12 00:19:24.882 lat (usec) : 750=0.39%, 1000=1.85% 00:19:24.882 lat (msec) : 2=17.28%, 4=1.54%, 10=12.79%, 20=52.58%, 50=13.36% 00:19:24.882 lat (msec) : 100=0.21% 00:19:24.882 cpu : usr=99.03%, sys=0.20%, 
ctx=28, majf=0, minf=5565 00:19:24.882 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:24.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.882 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:24.882 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.882 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:24.882 00:19:24.882 Run status group 0 (all jobs): 00:19:24.882 READ: bw=29.2MiB/s (30.6MB/s), 29.2MiB/s-29.2MiB/s (30.6MB/s-30.6MB/s), io=255MiB (267MB), run=8737-8737msec 00:19:24.882 WRITE: bw=46.0MiB/s (48.3MB/s), 46.0MiB/s-46.0MiB/s (48.3MB/s-48.3MB/s), io=256MiB (268MB), run=5560-5560msec 00:19:24.882 ----------------------------------------------------- 00:19:24.882 Suppressions used: 00:19:24.882 count bytes template 00:19:24.882 1 5 /usr/src/fio/parse.c 00:19:24.882 2 192 /usr/src/fio/iolog.c 00:19:24.882 1 8 libtcmalloc_minimal.so 00:19:24.882 1 904 libcrypto.so 00:19:24.882 ----------------------------------------------------- 00:19:24.882 00:19:24.882 18:28:42 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:19:24.882 18:28:42 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:24.882 18:28:42 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:24.882 18:28:42 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:24.882 Remove shared memory files 00:19:24.882 18:28:42 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:19:24.882 18:28:42 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:24.882 18:28:42 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:19:24.882 18:28:42 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:19:24.882 18:28:42 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57100 /dev/shm/spdk_tgt_trace.pid73977 00:19:24.882 18:28:42 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:24.882 18:28:42 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:19:24.882 ************************************ 00:19:24.882 END TEST ftl_fio_basic 00:19:24.882 ************************************ 00:19:24.882 00:19:24.882 real 1m4.192s 00:19:24.882 user 2m12.542s 00:19:24.882 sys 0m12.831s 00:19:24.882 18:28:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:24.882 18:28:42 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:24.882 18:28:42 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:24.882 18:28:42 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:24.882 18:28:42 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:24.882 18:28:42 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:24.882 ************************************ 00:19:24.882 START TEST ftl_bdevperf 00:19:24.882 ************************************ 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:24.882 * Looking for test storage... 
00:19:24.882 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:24.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.882 --rc genhtml_branch_coverage=1 00:19:24.882 --rc genhtml_function_coverage=1 00:19:24.882 --rc genhtml_legend=1 00:19:24.882 --rc geninfo_all_blocks=1 00:19:24.882 --rc geninfo_unexecuted_blocks=1 00:19:24.882 00:19:24.882 ' 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:24.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.882 --rc genhtml_branch_coverage=1 00:19:24.882 
--rc genhtml_function_coverage=1 00:19:24.882 --rc genhtml_legend=1 00:19:24.882 --rc geninfo_all_blocks=1 00:19:24.882 --rc geninfo_unexecuted_blocks=1 00:19:24.882 00:19:24.882 ' 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:24.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.882 --rc genhtml_branch_coverage=1 00:19:24.882 --rc genhtml_function_coverage=1 00:19:24.882 --rc genhtml_legend=1 00:19:24.882 --rc geninfo_all_blocks=1 00:19:24.882 --rc geninfo_unexecuted_blocks=1 00:19:24.882 00:19:24.882 ' 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:24.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.882 --rc genhtml_branch_coverage=1 00:19:24.882 --rc genhtml_function_coverage=1 00:19:24.882 --rc genhtml_legend=1 00:19:24.882 --rc geninfo_all_blocks=1 00:19:24.882 --rc geninfo_unexecuted_blocks=1 00:19:24.882 00:19:24.882 ' 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:24.882 18:28:42 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:24.883 18:28:42 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:24.883 18:28:42 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:24.883 18:28:42 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:24.883 18:28:42 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:24.883 18:28:42 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:24.883 18:28:42 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:24.883 18:28:42 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:24.883 18:28:42 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:24.883 18:28:42 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:24.883 18:28:42 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:24.883 18:28:42 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:24.883 18:28:42 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:24.883 18:28:42 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:24.883 18:28:42 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:24.883 18:28:42 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:24.883 18:28:42 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:24.883 18:28:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:19:24.883 18:28:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:19:24.883 18:28:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:19:24.883 18:28:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:24.883 18:28:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:19:24.883 18:28:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=75891 00:19:24.883 18:28:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:19:24.883 18:28:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:19:24.883 18:28:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 75891 00:19:24.883 18:28:42 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 75891 ']' 00:19:24.883 18:28:42 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.883 18:28:42 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:24.883 18:28:42 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.883 18:28:42 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:24.883 18:28:42 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:24.883 [2024-11-20 18:28:42.616043] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:19:24.883 [2024-11-20 18:28:42.616335] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75891 ] 00:19:24.883 [2024-11-20 18:28:42.780351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.883 [2024-11-20 18:28:42.899472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.883 18:28:43 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:24.883 18:28:43 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:19:24.883 18:28:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:24.883 18:28:43 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:19:24.883 18:28:43 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:24.883 18:28:43 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:19:24.883 18:28:43 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:19:24.883 18:28:43 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:25.454 18:28:43 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:25.454 18:28:43 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:19:25.454 18:28:43 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:25.454 18:28:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:25.454 18:28:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:25.454 18:28:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:25.454 18:28:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:25.454 18:28:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:25.454 18:28:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:25.454 { 00:19:25.454 "name": "nvme0n1", 00:19:25.454 "aliases": [ 00:19:25.454 "6d525d1b-dd6f-43c4-8132-8c72b3865c0d" 00:19:25.454 ], 00:19:25.454 "product_name": "NVMe disk", 00:19:25.454 "block_size": 4096, 00:19:25.454 "num_blocks": 1310720, 00:19:25.454 "uuid": "6d525d1b-dd6f-43c4-8132-8c72b3865c0d", 00:19:25.454 "numa_id": -1, 00:19:25.454 "assigned_rate_limits": { 00:19:25.454 "rw_ios_per_sec": 0, 00:19:25.454 "rw_mbytes_per_sec": 0, 00:19:25.454 "r_mbytes_per_sec": 0, 00:19:25.454 "w_mbytes_per_sec": 0 00:19:25.454 }, 00:19:25.454 "claimed": true, 00:19:25.454 "claim_type": "read_many_write_one", 00:19:25.454 "zoned": false, 00:19:25.454 "supported_io_types": { 00:19:25.454 "read": true, 00:19:25.454 "write": true, 00:19:25.454 "unmap": true, 00:19:25.454 "flush": true, 00:19:25.454 "reset": true, 00:19:25.454 "nvme_admin": true, 00:19:25.454 "nvme_io": true, 00:19:25.454 "nvme_io_md": false, 00:19:25.454 "write_zeroes": true, 00:19:25.454 "zcopy": false, 00:19:25.454 "get_zone_info": false, 00:19:25.454 "zone_management": false, 00:19:25.454 "zone_append": false, 00:19:25.454 "compare": true, 00:19:25.454 "compare_and_write": false, 00:19:25.454 "abort": true, 00:19:25.454 "seek_hole": false, 00:19:25.454 "seek_data": false, 00:19:25.454 "copy": true, 00:19:25.454 "nvme_iov_md": false 00:19:25.454 }, 00:19:25.454 "driver_specific": { 00:19:25.454 
"nvme": [ 00:19:25.454 { 00:19:25.454 "pci_address": "0000:00:11.0", 00:19:25.454 "trid": { 00:19:25.454 "trtype": "PCIe", 00:19:25.454 "traddr": "0000:00:11.0" 00:19:25.454 }, 00:19:25.454 "ctrlr_data": { 00:19:25.454 "cntlid": 0, 00:19:25.454 "vendor_id": "0x1b36", 00:19:25.454 "model_number": "QEMU NVMe Ctrl", 00:19:25.454 "serial_number": "12341", 00:19:25.454 "firmware_revision": "8.0.0", 00:19:25.454 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:25.454 "oacs": { 00:19:25.454 "security": 0, 00:19:25.454 "format": 1, 00:19:25.454 "firmware": 0, 00:19:25.454 "ns_manage": 1 00:19:25.454 }, 00:19:25.454 "multi_ctrlr": false, 00:19:25.454 "ana_reporting": false 00:19:25.454 }, 00:19:25.454 "vs": { 00:19:25.454 "nvme_version": "1.4" 00:19:25.454 }, 00:19:25.454 "ns_data": { 00:19:25.454 "id": 1, 00:19:25.454 "can_share": false 00:19:25.454 } 00:19:25.454 } 00:19:25.454 ], 00:19:25.454 "mp_policy": "active_passive" 00:19:25.454 } 00:19:25.454 } 00:19:25.454 ]' 00:19:25.454 18:28:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:25.454 18:28:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:25.454 18:28:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:25.454 18:28:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:25.454 18:28:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:25.454 18:28:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:19:25.454 18:28:44 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:19:25.454 18:28:44 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:25.454 18:28:44 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:19:25.454 18:28:44 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:25.455 18:28:44 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:25.715 18:28:44 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=78d35d9b-3585-4a66-bf8d-aa5cbe024f8b 00:19:25.715 18:28:44 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:19:25.715 18:28:44 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 78d35d9b-3585-4a66-bf8d-aa5cbe024f8b 00:19:25.976 18:28:44 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:26.235 18:28:44 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=db6313c3-71a8-4324-a388-a01b7008b3bf 00:19:26.235 18:28:44 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u db6313c3-71a8-4324-a388-a01b7008b3bf 00:19:26.493 18:28:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=5601283d-559b-43ae-8510-0e821d5bd4aa 00:19:26.493 18:28:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 5601283d-559b-43ae-8510-0e821d5bd4aa 00:19:26.493 18:28:44 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:19:26.493 18:28:44 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:26.493 18:28:44 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=5601283d-559b-43ae-8510-0e821d5bd4aa 00:19:26.493 18:28:44 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:19:26.493 18:28:44 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 5601283d-559b-43ae-8510-0e821d5bd4aa 00:19:26.493 18:28:44 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=5601283d-559b-43ae-8510-0e821d5bd4aa 00:19:26.493 18:28:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:26.493 18:28:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:26.493 18:28:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:26.493 18:28:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5601283d-559b-43ae-8510-0e821d5bd4aa 00:19:26.493 18:28:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:26.493 { 00:19:26.493 "name": "5601283d-559b-43ae-8510-0e821d5bd4aa", 00:19:26.493 "aliases": [ 00:19:26.493 "lvs/nvme0n1p0" 00:19:26.493 ], 00:19:26.493 "product_name": "Logical Volume", 00:19:26.493 "block_size": 4096, 00:19:26.493 "num_blocks": 26476544, 00:19:26.493 "uuid": "5601283d-559b-43ae-8510-0e821d5bd4aa", 00:19:26.493 "assigned_rate_limits": { 00:19:26.493 "rw_ios_per_sec": 0, 00:19:26.493 "rw_mbytes_per_sec": 0, 00:19:26.493 "r_mbytes_per_sec": 0, 00:19:26.493 "w_mbytes_per_sec": 0 00:19:26.493 }, 00:19:26.493 "claimed": false, 00:19:26.493 "zoned": false, 00:19:26.493 "supported_io_types": { 00:19:26.493 "read": true, 00:19:26.493 "write": true, 00:19:26.493 "unmap": true, 00:19:26.493 "flush": false, 00:19:26.493 "reset": true, 00:19:26.493 "nvme_admin": false, 00:19:26.493 "nvme_io": false, 00:19:26.494 "nvme_io_md": false, 00:19:26.494 "write_zeroes": true, 00:19:26.494 "zcopy": false, 00:19:26.494 "get_zone_info": false, 00:19:26.494 "zone_management": false, 00:19:26.494 "zone_append": false, 00:19:26.494 "compare": false, 00:19:26.494 "compare_and_write": false, 00:19:26.494 "abort": false, 00:19:26.494 "seek_hole": true, 00:19:26.494 "seek_data": true, 00:19:26.494 "copy": false, 00:19:26.494 "nvme_iov_md": false 00:19:26.494 }, 00:19:26.494 "driver_specific": { 00:19:26.494 "lvol": { 00:19:26.494 "lvol_store_uuid": "db6313c3-71a8-4324-a388-a01b7008b3bf", 00:19:26.494 "base_bdev": "nvme0n1", 00:19:26.494 "thin_provision": true, 00:19:26.494 "num_allocated_clusters": 0, 00:19:26.494 "snapshot": false, 00:19:26.494 "clone": false, 00:19:26.494 "esnap_clone": false 00:19:26.494 } 00:19:26.494 } 00:19:26.494 } 00:19:26.494 ]' 00:19:26.494 18:28:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:26.494 18:28:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:26.494 18:28:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:26.752 18:28:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:26.752 18:28:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:26.752 18:28:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:26.752 18:28:45 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:19:26.752 18:28:45 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:19:26.752 18:28:45 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:27.011 18:28:45 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:27.011 18:28:45 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:27.011 18:28:45 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 5601283d-559b-43ae-8510-0e821d5bd4aa 00:19:27.011 18:28:45 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=5601283d-559b-43ae-8510-0e821d5bd4aa 00:19:27.011 18:28:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:27.011 18:28:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:27.011 18:28:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:27.011 18:28:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5601283d-559b-43ae-8510-0e821d5bd4aa 00:19:27.011 18:28:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:27.011 { 00:19:27.011 "name": "5601283d-559b-43ae-8510-0e821d5bd4aa", 00:19:27.011 "aliases": [ 00:19:27.011 "lvs/nvme0n1p0" 00:19:27.011 ], 00:19:27.011 "product_name": "Logical Volume", 00:19:27.011 "block_size": 4096, 00:19:27.011 "num_blocks": 26476544, 00:19:27.011 "uuid": "5601283d-559b-43ae-8510-0e821d5bd4aa", 00:19:27.011 "assigned_rate_limits": { 00:19:27.011 "rw_ios_per_sec": 0, 00:19:27.011 "rw_mbytes_per_sec": 0, 00:19:27.011 "r_mbytes_per_sec": 0, 00:19:27.011 "w_mbytes_per_sec": 0 00:19:27.011 }, 00:19:27.011 "claimed": false, 00:19:27.011 "zoned": false, 00:19:27.011 "supported_io_types": { 00:19:27.011 "read": true, 00:19:27.011 "write": true, 00:19:27.011 "unmap": true, 00:19:27.011 "flush": false, 00:19:27.012 "reset": true, 00:19:27.012 "nvme_admin": false, 00:19:27.012 "nvme_io": false, 00:19:27.012 "nvme_io_md": false, 00:19:27.012 "write_zeroes": true, 00:19:27.012 "zcopy": false, 00:19:27.012 "get_zone_info": false, 00:19:27.012 "zone_management": false, 00:19:27.012 "zone_append": false, 00:19:27.012 "compare": false, 00:19:27.012 "compare_and_write": false, 00:19:27.012 "abort": false, 00:19:27.012 "seek_hole": true, 00:19:27.012 "seek_data": true, 00:19:27.012 "copy": false, 00:19:27.012 "nvme_iov_md": false 00:19:27.012 }, 00:19:27.012 "driver_specific": { 00:19:27.012 "lvol": { 00:19:27.012 "lvol_store_uuid": "db6313c3-71a8-4324-a388-a01b7008b3bf", 00:19:27.012 "base_bdev": "nvme0n1", 00:19:27.012 "thin_provision": true, 00:19:27.012 "num_allocated_clusters": 0, 00:19:27.012 "snapshot": false, 00:19:27.012 "clone": false, 00:19:27.012 "esnap_clone": false 00:19:27.012 } 00:19:27.012 } 00:19:27.012 } 00:19:27.012 ]' 00:19:27.012 18:28:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:27.270 18:28:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:27.270 18:28:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:27.270 18:28:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:27.270 18:28:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:27.270 18:28:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:27.270 18:28:45 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:19:27.270 18:28:45 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:27.270 18:28:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:19:27.270 18:28:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 5601283d-559b-43ae-8510-0e821d5bd4aa 00:19:27.270 18:28:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=5601283d-559b-43ae-8510-0e821d5bd4aa 00:19:27.270 18:28:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:27.270 18:28:45 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:19:27.270 18:28:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:27.270 18:28:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5601283d-559b-43ae-8510-0e821d5bd4aa 00:19:27.529 18:28:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:27.529 { 00:19:27.529 "name": "5601283d-559b-43ae-8510-0e821d5bd4aa", 00:19:27.529 "aliases": [ 00:19:27.529 "lvs/nvme0n1p0" 00:19:27.529 ], 00:19:27.529 "product_name": "Logical Volume", 00:19:27.529 "block_size": 4096, 00:19:27.529 "num_blocks": 26476544, 00:19:27.529 "uuid": "5601283d-559b-43ae-8510-0e821d5bd4aa", 00:19:27.529 "assigned_rate_limits": { 00:19:27.529 "rw_ios_per_sec": 0, 00:19:27.529 "rw_mbytes_per_sec": 0, 00:19:27.529 "r_mbytes_per_sec": 0, 00:19:27.529 "w_mbytes_per_sec": 0 00:19:27.529 }, 00:19:27.529 "claimed": false, 00:19:27.529 "zoned": false, 00:19:27.529 "supported_io_types": { 00:19:27.529 "read": true, 00:19:27.529 "write": true, 00:19:27.529 "unmap": true, 00:19:27.529 "flush": false, 00:19:27.529 "reset": true, 00:19:27.529 "nvme_admin": false, 00:19:27.529 "nvme_io": false, 00:19:27.529 "nvme_io_md": false, 00:19:27.529 "write_zeroes": true, 00:19:27.529 "zcopy": false, 00:19:27.529 "get_zone_info": false, 00:19:27.529 "zone_management": false, 00:19:27.529 "zone_append": false, 00:19:27.529 "compare": false, 00:19:27.529 "compare_and_write": false, 00:19:27.529 "abort": false, 00:19:27.529 "seek_hole": true, 00:19:27.529 "seek_data": true, 00:19:27.529 "copy": false, 00:19:27.529 "nvme_iov_md": false 00:19:27.529 }, 00:19:27.529 "driver_specific": { 00:19:27.529 "lvol": { 00:19:27.529 "lvol_store_uuid": "db6313c3-71a8-4324-a388-a01b7008b3bf", 00:19:27.529 "base_bdev": "nvme0n1", 00:19:27.529 "thin_provision": true, 00:19:27.529 "num_allocated_clusters": 0, 00:19:27.529 "snapshot": false, 00:19:27.529 "clone": false, 00:19:27.529 "esnap_clone": false 00:19:27.529 } 00:19:27.529 } 00:19:27.529 } 00:19:27.529 ]' 00:19:27.529 18:28:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:27.529 18:28:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:27.529 18:28:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:27.529 18:28:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:27.529 18:28:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:27.529 18:28:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:27.529 18:28:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:19:27.529 18:28:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 5601283d-559b-43ae-8510-0e821d5bd4aa -c nvc0n1p0 --l2p_dram_limit 20 00:19:27.789 [2024-11-20 18:28:46.321639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.789 [2024-11-20 18:28:46.321761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:27.789 [2024-11-20 18:28:46.321778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:27.789 [2024-11-20 18:28:46.321786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.789 [2024-11-20 18:28:46.321831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.789 [2024-11-20 18:28:46.321842] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:27.789 [2024-11-20 18:28:46.321848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:27.789 [2024-11-20 18:28:46.321855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.789 [2024-11-20 18:28:46.321869] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:27.789 [2024-11-20 18:28:46.322460] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:27.789 [2024-11-20 18:28:46.322473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.789 [2024-11-20 18:28:46.322482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:27.789 [2024-11-20 18:28:46.322489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.608 ms 00:19:27.789 [2024-11-20 18:28:46.322497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.789 [2024-11-20 18:28:46.322534] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 593fdb62-133f-434f-b1b0-645d6cfbf02e 00:19:27.789 [2024-11-20 18:28:46.323477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.789 [2024-11-20 18:28:46.323506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:27.789 [2024-11-20 18:28:46.323515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:19:27.789 [2024-11-20 18:28:46.323523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.789 [2024-11-20 18:28:46.328137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.789 [2024-11-20 18:28:46.328239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:27.789 [2024-11-20 18:28:46.328254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.582 ms 00:19:27.789 [2024-11-20 18:28:46.328260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.789 [2024-11-20 18:28:46.328359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.789 [2024-11-20 18:28:46.328369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:27.789 [2024-11-20 18:28:46.328379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:27.789 [2024-11-20 18:28:46.328385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.789 [2024-11-20 18:28:46.328423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.789 [2024-11-20 18:28:46.328431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:27.789 [2024-11-20 18:28:46.328439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:27.789 [2024-11-20 18:28:46.328444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.789 [2024-11-20 18:28:46.328460] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:27.789 [2024-11-20 18:28:46.331323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.789 [2024-11-20 18:28:46.331417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:27.789 [2024-11-20 18:28:46.331429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.869 ms 00:19:27.789 [2024-11-20 18:28:46.331439] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.789 [2024-11-20 18:28:46.331465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.789 [2024-11-20 18:28:46.331473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:27.789 [2024-11-20 18:28:46.331480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:27.789 [2024-11-20 18:28:46.331487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.789 [2024-11-20 18:28:46.331498] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:27.790 [2024-11-20 18:28:46.331609] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:27.790 [2024-11-20 18:28:46.331619] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:27.790 [2024-11-20 18:28:46.331628] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:27.790 [2024-11-20 18:28:46.331636] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:27.790 [2024-11-20 18:28:46.331645] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:27.790 [2024-11-20 18:28:46.331650] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:27.790 [2024-11-20 18:28:46.331657] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:27.790 [2024-11-20 18:28:46.331663] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:27.790 [2024-11-20 18:28:46.331670] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:27.790 [2024-11-20 18:28:46.331675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.790 [2024-11-20 18:28:46.331684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:27.790 [2024-11-20 18:28:46.331689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:19:27.790 [2024-11-20 18:28:46.331697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.790 [2024-11-20 18:28:46.331759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.790 [2024-11-20 18:28:46.331767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:27.790 [2024-11-20 18:28:46.331773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:27.790 [2024-11-20 18:28:46.331781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.790 [2024-11-20 18:28:46.331861] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:27.790 [2024-11-20 18:28:46.331870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:27.790 [2024-11-20 18:28:46.331879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:27.790 [2024-11-20 18:28:46.331887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.790 [2024-11-20 18:28:46.331892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:27.790 [2024-11-20 18:28:46.331898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:27.790 [2024-11-20 18:28:46.331903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:27.790 
[2024-11-20 18:28:46.331910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:27.790 [2024-11-20 18:28:46.331916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:27.790 [2024-11-20 18:28:46.331922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:27.790 [2024-11-20 18:28:46.331929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:27.790 [2024-11-20 18:28:46.331935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:27.790 [2024-11-20 18:28:46.331941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:27.790 [2024-11-20 18:28:46.331952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:27.790 [2024-11-20 18:28:46.331961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:27.790 [2024-11-20 18:28:46.331969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.790 [2024-11-20 18:28:46.331975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:27.790 [2024-11-20 18:28:46.331982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:27.790 [2024-11-20 18:28:46.331987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.790 [2024-11-20 18:28:46.331993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:27.790 [2024-11-20 18:28:46.331998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:27.790 [2024-11-20 18:28:46.332004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:27.790 [2024-11-20 18:28:46.332010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:27.790 [2024-11-20 18:28:46.332017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:27.790 [2024-11-20 18:28:46.332022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:27.790 [2024-11-20 18:28:46.332028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:27.790 [2024-11-20 18:28:46.332033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:27.790 [2024-11-20 18:28:46.332039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:27.790 [2024-11-20 18:28:46.332044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:27.790 [2024-11-20 18:28:46.332050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:27.790 [2024-11-20 18:28:46.332054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:27.790 [2024-11-20 18:28:46.332062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:27.790 [2024-11-20 18:28:46.332068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:27.790 [2024-11-20 18:28:46.332074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:27.790 [2024-11-20 18:28:46.332079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:27.790 [2024-11-20 18:28:46.332085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:27.790 [2024-11-20 18:28:46.332090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:27.790 [2024-11-20 18:28:46.332112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:27.790 [2024-11-20 18:28:46.332117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:19:27.790 [2024-11-20 18:28:46.332124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.790 [2024-11-20 18:28:46.332129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:27.790 [2024-11-20 18:28:46.332137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:27.790 [2024-11-20 18:28:46.332142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.790 [2024-11-20 18:28:46.332148] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:27.790 [2024-11-20 18:28:46.332153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:27.790 [2024-11-20 18:28:46.332161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:27.790 [2024-11-20 18:28:46.332168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.790 [2024-11-20 18:28:46.332177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:27.790 [2024-11-20 18:28:46.332183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:27.790 [2024-11-20 18:28:46.332190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:27.790 [2024-11-20 18:28:46.332196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:27.790 [2024-11-20 18:28:46.332202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:27.790 [2024-11-20 18:28:46.332207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:27.790 [2024-11-20 18:28:46.332216] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:27.790 [2024-11-20 18:28:46.332223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:27.790 [2024-11-20 18:28:46.332231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:27.790 [2024-11-20 18:28:46.332237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:27.790 [2024-11-20 18:28:46.332243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:27.790 [2024-11-20 18:28:46.332249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:27.790 [2024-11-20 18:28:46.332256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:27.790 [2024-11-20 18:28:46.332262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:27.790 [2024-11-20 18:28:46.332268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:27.790 [2024-11-20 18:28:46.332274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:27.790 [2024-11-20 18:28:46.332282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:27.790 [2024-11-20 18:28:46.332287] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:27.790 [2024-11-20 18:28:46.332295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:27.790 [2024-11-20 18:28:46.332300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:27.790 [2024-11-20 18:28:46.332307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:27.790 [2024-11-20 18:28:46.332312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:27.790 [2024-11-20 18:28:46.332320] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:27.790 [2024-11-20 18:28:46.332326] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:27.791 [2024-11-20 18:28:46.332333] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:27.791 [2024-11-20 18:28:46.332339] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:27.791 [2024-11-20 18:28:46.332345] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:27.791 [2024-11-20 18:28:46.332351] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:27.791 [2024-11-20 18:28:46.332358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.791 [2024-11-20 18:28:46.332365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:27.791 [2024-11-20 18:28:46.332373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.546 ms 00:19:27.791 [2024-11-20 18:28:46.332379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.791 [2024-11-20 18:28:46.332416] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
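(Aside, not part of the test output: the layout numbers above can be cross-checked with plain shell arithmetic; the echo one-liners below are illustrative only.)

  # 26476544 blocks x 4096 B = the base device capacity reported above
  echo $(( 26476544 * 4096 / 1024 / 1024 ))        # -> 103424 (MiB)
  # 20971520 L2P entries x 4 B per entry = the full mapping table
  echo $(( 20971520 * 4 / 1024 / 1024 ))           # -> 80 (MiB)
  # 20971520 entries x 4096 B per block = addressable user space
  echo $(( 20971520 * 4096 / 1024 / 1024 / 1024 )) # -> 80 (GiB)

Of that 80 MiB L2P table, --l2p_dram_limit 20 keeps at most 20 MiB resident, which is why the startup trace below reports "l2p maximum resident size is: 19 (of 20) MiB".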
00:19:27.791 [2024-11-20 18:28:46.332424] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:32.098 [2024-11-20 18:28:50.150365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.098 [2024-11-20 18:28:50.150415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:32.098 [2024-11-20 18:28:50.150430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3817.934 ms 00:19:32.098 [2024-11-20 18:28:50.150437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.098 [2024-11-20 18:28:50.170775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.098 [2024-11-20 18:28:50.170812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:32.098 [2024-11-20 18:28:50.170824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.173 ms 00:19:32.098 [2024-11-20 18:28:50.170830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.098 [2024-11-20 18:28:50.170922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.098 [2024-11-20 18:28:50.170930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:32.098 [2024-11-20 18:28:50.170940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:19:32.098 [2024-11-20 18:28:50.170946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.098 [2024-11-20 18:28:50.206944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.099 [2024-11-20 18:28:50.206976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:32.099 [2024-11-20 18:28:50.206987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.971 ms 00:19:32.099 [2024-11-20 18:28:50.206993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.099 [2024-11-20 18:28:50.207022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.099 [2024-11-20 18:28:50.207031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:32.099 [2024-11-20 18:28:50.207039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:32.099 [2024-11-20 18:28:50.207045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.099 [2024-11-20 18:28:50.207378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.099 [2024-11-20 18:28:50.207392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:32.099 [2024-11-20 18:28:50.207401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:19:32.099 [2024-11-20 18:28:50.207407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.099 [2024-11-20 18:28:50.207490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.099 [2024-11-20 18:28:50.207497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:32.099 [2024-11-20 18:28:50.207506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:19:32.099 [2024-11-20 18:28:50.207511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.099 [2024-11-20 18:28:50.217996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.099 [2024-11-20 18:28:50.218024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:32.099 [2024-11-20 
18:28:50.218033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.470 ms 00:19:32.099 [2024-11-20 18:28:50.218039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.099 [2024-11-20 18:28:50.226987] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:19:32.099 [2024-11-20 18:28:50.231466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.099 [2024-11-20 18:28:50.231493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:32.099 [2024-11-20 18:28:50.231501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.373 ms 00:19:32.099 [2024-11-20 18:28:50.231509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.099 [2024-11-20 18:28:50.295530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.099 [2024-11-20 18:28:50.295567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:32.099 [2024-11-20 18:28:50.295578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.003 ms 00:19:32.100 [2024-11-20 18:28:50.295586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.100 [2024-11-20 18:28:50.295710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.100 [2024-11-20 18:28:50.295721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:32.100 [2024-11-20 18:28:50.295728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:19:32.100 [2024-11-20 18:28:50.295735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.100 [2024-11-20 18:28:50.313381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.100 [2024-11-20 18:28:50.313412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:32.100 [2024-11-20 18:28:50.313421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.610 ms 00:19:32.100 [2024-11-20 18:28:50.313429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.100 [2024-11-20 18:28:50.330492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.100 [2024-11-20 18:28:50.330521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:32.100 [2024-11-20 18:28:50.330530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.048 ms 00:19:32.100 [2024-11-20 18:28:50.330537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.100 [2024-11-20 18:28:50.330959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.100 [2024-11-20 18:28:50.330968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:32.100 [2024-11-20 18:28:50.330975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:19:32.100 [2024-11-20 18:28:50.330982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.100 [2024-11-20 18:28:50.389842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.100 [2024-11-20 18:28:50.389874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:32.100 [2024-11-20 18:28:50.389883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.839 ms 00:19:32.100 [2024-11-20 18:28:50.389890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.100 [2024-11-20 
18:28:50.408649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.100 [2024-11-20 18:28:50.408679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:32.100 [2024-11-20 18:28:50.408687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.710 ms 00:19:32.100 [2024-11-20 18:28:50.408696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.100 [2024-11-20 18:28:50.426337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.100 [2024-11-20 18:28:50.426365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:32.101 [2024-11-20 18:28:50.426373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.616 ms 00:19:32.101 [2024-11-20 18:28:50.426379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.101 [2024-11-20 18:28:50.443876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.101 [2024-11-20 18:28:50.443906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:32.101 [2024-11-20 18:28:50.443914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.471 ms 00:19:32.101 [2024-11-20 18:28:50.443921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.101 [2024-11-20 18:28:50.443949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.101 [2024-11-20 18:28:50.443959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:32.101 [2024-11-20 18:28:50.443966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:32.101 [2024-11-20 18:28:50.443973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.101 [2024-11-20 18:28:50.444029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.101 [2024-11-20 18:28:50.444039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:32.101 [2024-11-20 18:28:50.444045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:19:32.101 [2024-11-20 18:28:50.444052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.101 [2024-11-20 18:28:50.444937] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4122.971 ms, result 0 00:19:32.101 { 00:19:32.101 "name": "ftl0", 00:19:32.101 "uuid": "593fdb62-133f-434f-b1b0-645d6cfbf02e" 00:19:32.101 } 00:19:32.101 18:28:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:19:32.101 18:28:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:19:32.101 18:28:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:19:32.101 18:28:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:19:32.361 [2024-11-20 18:28:50.740979] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:32.361 I/O size of 69632 is greater than zero copy threshold (65536). 00:19:32.361 Zero copy mechanism will not be used. 00:19:32.361 Running I/O for 4 seconds... 
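(Side note on the -o 69632 used for this run: 69632 B is 17 x 4 KiB blocks, i.e. 68 KiB per I/O, which is 4096 B over the 65536 B threshold, so the zero-copy path is skipped exactly as the notice above states. Illustrative arithmetic only:)

  echo $(( 69632 / 4096 ))    # -> 17 (4 KiB blocks per I/O)
  echo $(( 69632 - 65536 ))   # -> 4096 (bytes over the zero-copy threshold)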
00:19:34.229 848.00 IOPS, 56.31 MiB/s [2024-11-20T18:28:53.792Z] 1129.50 IOPS, 75.01 MiB/s [2024-11-20T18:28:55.167Z] 1047.33 IOPS, 69.55 MiB/s [2024-11-20T18:28:55.167Z] 1060.00 IOPS, 70.39 MiB/s 00:19:36.538 Latency(us) 00:19:36.538 [2024-11-20T18:28:55.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:36.538 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:19:36.538 ftl0 : 4.00 1059.90 70.38 0.00 0.00 994.12 159.11 2419.79 00:19:36.538 [2024-11-20T18:28:55.167Z] =================================================================================================================== 00:19:36.538 [2024-11-20T18:28:55.167Z] Total : 1059.90 70.38 0.00 0.00 994.12 159.11 2419.79 00:19:36.538 [2024-11-20 18:28:54.748560] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:36.538 { 00:19:36.538 "results": [ 00:19:36.538 { 00:19:36.538 "job": "ftl0", 00:19:36.538 "core_mask": "0x1", 00:19:36.538 "workload": "randwrite", 00:19:36.538 "status": "finished", 00:19:36.538 "queue_depth": 1, 00:19:36.538 "io_size": 69632, 00:19:36.538 "runtime": 4.001325, 00:19:36.538 "iops": 1059.8989084865639, 00:19:36.538 "mibps": 70.38391189168588, 00:19:36.538 "io_failed": 0, 00:19:36.538 "io_timeout": 0, 00:19:36.538 "avg_latency_us": 994.1226691817967, 00:19:36.538 "min_latency_us": 159.11384615384614, 00:19:36.538 "max_latency_us": 2419.7907692307695 00:19:36.538 } 00:19:36.538 ], 00:19:36.538 "core_count": 1 00:19:36.538 } 00:19:36.538 18:28:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:19:36.538 [2024-11-20 18:28:54.857072] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:36.538 Running I/O for 4 seconds... 
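(Aside: the MiB/s column in these bdevperf tables is simply IOPS x I/O size; run 1's totals above check out as follows. The bc invocation is illustrative, not part of the test.)

  echo "scale=2; 1059.90 * 69632 / 1048576" | bc   # -> 70.38 (MiB/s), matching the run 1 total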
00:19:38.424 5727.00 IOPS, 22.37 MiB/s [2024-11-20T18:28:57.997Z] 5307.50 IOPS, 20.73 MiB/s [2024-11-20T18:28:58.940Z] 5140.00 IOPS, 20.08 MiB/s [2024-11-20T18:28:58.940Z] 5180.75 IOPS, 20.24 MiB/s 00:19:40.311 Latency(us) 00:19:40.311 [2024-11-20T18:28:58.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.311 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:19:40.311 ftl0 : 4.03 5175.53 20.22 0.00 0.00 24648.34 393.85 50412.31 00:19:40.311 [2024-11-20T18:28:58.940Z] =================================================================================================================== 00:19:40.311 [2024-11-20T18:28:58.940Z] Total : 5175.53 20.22 0.00 0.00 24648.34 0.00 50412.31 00:19:40.311 [2024-11-20 18:28:58.893824] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:40.311 { 00:19:40.311 "results": [ 00:19:40.311 { 00:19:40.311 "job": "ftl0", 00:19:40.311 "core_mask": "0x1", 00:19:40.311 "workload": "randwrite", 00:19:40.311 "status": "finished", 00:19:40.311 "queue_depth": 128, 00:19:40.311 "io_size": 4096, 00:19:40.311 "runtime": 4.028766, 00:19:40.311 "iops": 5175.530174748297, 00:19:40.311 "mibps": 20.216914745110536, 00:19:40.311 "io_failed": 0, 00:19:40.311 "io_timeout": 0, 00:19:40.311 "avg_latency_us": 24648.34298978466, 00:19:40.311 "min_latency_us": 393.84615384615387, 00:19:40.311 "max_latency_us": 50412.307692307695 00:19:40.311 } 00:19:40.311 ], 00:19:40.311 "core_count": 1 00:19:40.311 } 00:19:40.311 18:28:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:19:40.572 [2024-11-20 18:28:59.012575] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:40.572 Running I/O for 4 seconds... 
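(Aside: a rough Little's-law sanity check of the q=128 run above, IOPS ~= queue_depth / avg_latency, lands within about half a percent of the reported 5175.53 IOPS. Illustrative only:)

  echo "scale=0; 128 * 1000000 / 24648" | bc   # -> 5193, close to the 5175 IOPS reported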
00:19:42.454 5360.00 IOPS, 20.94 MiB/s [2024-11-20T18:29:02.027Z] 5014.00 IOPS, 19.59 MiB/s [2024-11-20T18:29:03.414Z] 5240.33 IOPS, 20.47 MiB/s [2024-11-20T18:29:03.414Z] 5112.50 IOPS, 19.97 MiB/s 00:19:44.785 Latency(us) 00:19:44.785 [2024-11-20T18:29:03.414Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.785 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:44.785 Verification LBA range: start 0x0 length 0x1400000 00:19:44.785 ftl0 : 4.02 5120.63 20.00 0.00 0.00 24907.99 278.84 40329.85 00:19:44.785 [2024-11-20T18:29:03.414Z] =================================================================================================================== 00:19:44.785 [2024-11-20T18:29:03.414Z] Total : 5120.63 20.00 0.00 0.00 24907.99 0.00 40329.85 00:19:44.785 [2024-11-20 18:29:03.045260] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:44.785 { 00:19:44.785 "results": [ 00:19:44.785 { 00:19:44.785 "job": "ftl0", 00:19:44.785 "core_mask": "0x1", 00:19:44.785 "workload": "verify", 00:19:44.785 "status": "finished", 00:19:44.785 "verify_range": { 00:19:44.785 "start": 0, 00:19:44.785 "length": 20971520 00:19:44.785 }, 00:19:44.785 "queue_depth": 128, 00:19:44.785 "io_size": 4096, 00:19:44.785 "runtime": 4.016302, 00:19:44.785 "iops": 5120.630868893823, 00:19:44.785 "mibps": 20.002464331616498, 00:19:44.785 "io_failed": 0, 00:19:44.785 "io_timeout": 0, 00:19:44.785 "avg_latency_us": 24907.988829359885, 00:19:44.785 "min_latency_us": 278.8430769230769, 00:19:44.785 "max_latency_us": 40329.846153846156 00:19:44.785 } 00:19:44.785 ], 00:19:44.785 "core_count": 1 00:19:44.785 } 00:19:44.785 18:29:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:19:44.785 [2024-11-20 18:29:03.256478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.785 [2024-11-20 18:29:03.256541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:44.785 [2024-11-20 18:29:03.256558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:44.785 [2024-11-20 18:29:03.256569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.785 [2024-11-20 18:29:03.256591] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:44.785 [2024-11-20 18:29:03.259600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.785 [2024-11-20 18:29:03.259646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:44.785 [2024-11-20 18:29:03.259660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.986 ms 00:19:44.785 [2024-11-20 18:29:03.259669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.785 [2024-11-20 18:29:03.263056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.785 [2024-11-20 18:29:03.263123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:44.785 [2024-11-20 18:29:03.263143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.351 ms 00:19:44.785 [2024-11-20 18:29:03.263152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.048 [2024-11-20 18:29:03.471216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.048 [2024-11-20 18:29:03.471275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:19:45.048 [2024-11-20 18:29:03.471295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 208.033 ms 00:19:45.048 [2024-11-20 18:29:03.471304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.048 [2024-11-20 18:29:03.477512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.048 [2024-11-20 18:29:03.477556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:45.048 [2024-11-20 18:29:03.477572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.158 ms 00:19:45.048 [2024-11-20 18:29:03.477581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.048 [2024-11-20 18:29:03.503960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.048 [2024-11-20 18:29:03.504010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:45.048 [2024-11-20 18:29:03.504025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.314 ms 00:19:45.048 [2024-11-20 18:29:03.504033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.048 [2024-11-20 18:29:03.522342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.048 [2024-11-20 18:29:03.522391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:45.048 [2024-11-20 18:29:03.522411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.251 ms 00:19:45.048 [2024-11-20 18:29:03.522419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.048 [2024-11-20 18:29:03.522585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.048 [2024-11-20 18:29:03.522599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:45.048 [2024-11-20 18:29:03.522615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:19:45.048 [2024-11-20 18:29:03.522624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.048 [2024-11-20 18:29:03.549149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.048 [2024-11-20 18:29:03.549197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:45.048 [2024-11-20 18:29:03.549212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.502 ms 00:19:45.048 [2024-11-20 18:29:03.549220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.048 [2024-11-20 18:29:03.574620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.048 [2024-11-20 18:29:03.574666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:45.048 [2024-11-20 18:29:03.574680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.344 ms 00:19:45.048 [2024-11-20 18:29:03.574687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.048 [2024-11-20 18:29:03.599825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.048 [2024-11-20 18:29:03.599874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:45.048 [2024-11-20 18:29:03.599890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.082 ms 00:19:45.048 [2024-11-20 18:29:03.599898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.048 [2024-11-20 18:29:03.624986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.048 [2024-11-20 
18:29:03.625043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:45.048 [2024-11-20 18:29:03.625070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.990 ms 00:19:45.048 [2024-11-20 18:29:03.625081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.048 [2024-11-20 18:29:03.625151] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:45.048 [2024-11-20 18:29:03.625180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:45.048 [2024-11-20 18:29:03.625198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:45.048 [2024-11-20 18:29:03.625209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:45.048 [2024-11-20 18:29:03.625219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:45.048 [2024-11-20 18:29:03.625227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:45.048 [2024-11-20 18:29:03.625238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:45.048 [2024-11-20 18:29:03.625246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:45.048 [2024-11-20 18:29:03.625256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:45.048 [2024-11-20 18:29:03.625265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:45.048 [2024-11-20 18:29:03.625276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:45.048 [2024-11-20 18:29:03.625284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 
wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625858] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.625993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.626049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.626059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.626068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.626075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.626084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.626091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.626120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.626129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.626140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.626148] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.626159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.626167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:45.049 [2024-11-20 18:29:03.626177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:45.050 [2024-11-20 18:29:03.626193] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:45.050 [2024-11-20 18:29:03.626206] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 593fdb62-133f-434f-b1b0-645d6cfbf02e 00:19:45.050 [2024-11-20 18:29:03.626215] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:45.050 [2024-11-20 18:29:03.626224] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:45.050 [2024-11-20 18:29:03.626234] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:45.050 [2024-11-20 18:29:03.626244] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:45.050 [2024-11-20 18:29:03.626253] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:45.050 [2024-11-20 18:29:03.626262] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:45.050 [2024-11-20 18:29:03.626270] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:45.050 [2024-11-20 18:29:03.626282] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:45.050 [2024-11-20 18:29:03.626290] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:45.050 [2024-11-20 18:29:03.626299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.050 [2024-11-20 18:29:03.626307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:45.050 [2024-11-20 18:29:03.626318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.152 ms 00:19:45.050 [2024-11-20 18:29:03.626325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.050 [2024-11-20 18:29:03.640081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.050 [2024-11-20 18:29:03.640136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:45.050 [2024-11-20 18:29:03.640151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.714 ms 00:19:45.050 [2024-11-20 18:29:03.640159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.050 [2024-11-20 18:29:03.640567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.050 [2024-11-20 18:29:03.640590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:45.050 [2024-11-20 18:29:03.640604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.369 ms 00:19:45.050 [2024-11-20 18:29:03.640614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.311 [2024-11-20 18:29:03.679718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.311 [2024-11-20 18:29:03.679768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:45.311 [2024-11-20 18:29:03.679787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.311 [2024-11-20 18:29:03.679798] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:45.311 [2024-11-20 18:29:03.679864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.311 [2024-11-20 18:29:03.679872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:45.311 [2024-11-20 18:29:03.679883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.311 [2024-11-20 18:29:03.679891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.311 [2024-11-20 18:29:03.679993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.311 [2024-11-20 18:29:03.680010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:45.311 [2024-11-20 18:29:03.680020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.311 [2024-11-20 18:29:03.680028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.311 [2024-11-20 18:29:03.680048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.311 [2024-11-20 18:29:03.680056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:45.311 [2024-11-20 18:29:03.680066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.311 [2024-11-20 18:29:03.680074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.311 [2024-11-20 18:29:03.764667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.311 [2024-11-20 18:29:03.764725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:45.311 [2024-11-20 18:29:03.764742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.311 [2024-11-20 18:29:03.764751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.311 [2024-11-20 18:29:03.834402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.311 [2024-11-20 18:29:03.834705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:45.311 [2024-11-20 18:29:03.834735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.311 [2024-11-20 18:29:03.834745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.311 [2024-11-20 18:29:03.834842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.311 [2024-11-20 18:29:03.834854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:45.311 [2024-11-20 18:29:03.834868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.311 [2024-11-20 18:29:03.834876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.311 [2024-11-20 18:29:03.834947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.311 [2024-11-20 18:29:03.834959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:45.311 [2024-11-20 18:29:03.834971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.311 [2024-11-20 18:29:03.834979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.311 [2024-11-20 18:29:03.835123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.311 [2024-11-20 18:29:03.835137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:45.311 [2024-11-20 18:29:03.835155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:19:45.311 [2024-11-20 18:29:03.835163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.311 [2024-11-20 18:29:03.835202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.311 [2024-11-20 18:29:03.835212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:45.311 [2024-11-20 18:29:03.835223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.311 [2024-11-20 18:29:03.835231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.311 [2024-11-20 18:29:03.835277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.311 [2024-11-20 18:29:03.835286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:45.311 [2024-11-20 18:29:03.835300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.311 [2024-11-20 18:29:03.835312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.311 [2024-11-20 18:29:03.835364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.311 [2024-11-20 18:29:03.835384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:45.311 [2024-11-20 18:29:03.835396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.311 [2024-11-20 18:29:03.835404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.311 [2024-11-20 18:29:03.835553] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 579.022 ms, result 0 00:19:45.311 true 00:19:45.311 18:29:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 75891 00:19:45.311 18:29:03 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 75891 ']' 00:19:45.311 18:29:03 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 75891 00:19:45.311 18:29:03 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:19:45.311 18:29:03 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:45.311 18:29:03 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75891 00:19:45.311 killing process with pid 75891 00:19:45.311 Received shutdown signal, test time was about 4.000000 seconds 00:19:45.311 00:19:45.311 Latency(us) 00:19:45.311 [2024-11-20T18:29:03.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:45.311 [2024-11-20T18:29:03.940Z] =================================================================================================================== 00:19:45.311 [2024-11-20T18:29:03.940Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:45.311 18:29:03 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:45.311 18:29:03 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:45.311 18:29:03 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75891' 00:19:45.311 18:29:03 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 75891 00:19:45.311 18:29:03 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 75891 00:19:50.601 Remove shared memory files 00:19:50.601 18:29:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:50.601 18:29:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:19:50.601 18:29:08 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:50.601 18:29:08 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:19:50.601 18:29:08 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:19:50.601 18:29:08 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:19:50.601 18:29:08 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:50.601 18:29:08 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:19:50.601 00:19:50.601 real 0m26.524s 00:19:50.601 user 0m29.035s 00:19:50.601 sys 0m0.977s 00:19:50.601 ************************************ 00:19:50.601 END TEST ftl_bdevperf 00:19:50.601 ************************************ 00:19:50.601 18:29:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:50.601 18:29:08 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:50.601 18:29:08 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:50.601 18:29:08 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:50.601 18:29:08 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:50.601 18:29:08 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:50.601 ************************************ 00:19:50.601 START TEST ftl_trim 00:19:50.601 ************************************ 00:19:50.601 18:29:08 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:50.601 * Looking for test storage... 00:19:50.601 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:50.601 18:29:09 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:50.601 18:29:09 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:19:50.601 18:29:09 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:50.601 18:29:09 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:50.601 18:29:09 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:50.601 18:29:09 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:50.601 18:29:09 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:50.601 18:29:09 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:19:50.601 18:29:09 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:19:50.601 18:29:09 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:19:50.601 18:29:09 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:19:50.601 18:29:09 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:19:50.601 18:29:09 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:19:50.601 18:29:09 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:19:50.601 18:29:09 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:50.601 18:29:09 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:19:50.601 18:29:09 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:19:50.601 18:29:09 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:50.601 18:29:09 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:50.601 18:29:09 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:19:50.601 18:29:09 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:19:50.601 18:29:09 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:50.601 18:29:09 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:19:50.601 18:29:09 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:19:50.601 18:29:09 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:19:50.601 18:29:09 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:19:50.601 18:29:09 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:50.601 18:29:09 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:19:50.601 18:29:09 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:19:50.601 18:29:09 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:50.601 18:29:09 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:50.601 18:29:09 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:19:50.601 18:29:09 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:50.601 18:29:09 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:50.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.601 --rc genhtml_branch_coverage=1 00:19:50.601 --rc genhtml_function_coverage=1 00:19:50.601 --rc genhtml_legend=1 00:19:50.601 --rc geninfo_all_blocks=1 00:19:50.601 --rc geninfo_unexecuted_blocks=1 00:19:50.601 00:19:50.601 ' 00:19:50.601 18:29:09 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:50.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.601 --rc genhtml_branch_coverage=1 00:19:50.601 --rc genhtml_function_coverage=1 00:19:50.601 --rc genhtml_legend=1 00:19:50.601 --rc geninfo_all_blocks=1 00:19:50.601 --rc geninfo_unexecuted_blocks=1 00:19:50.601 00:19:50.601 ' 00:19:50.601 18:29:09 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:50.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.601 --rc genhtml_branch_coverage=1 00:19:50.601 --rc genhtml_function_coverage=1 00:19:50.601 --rc genhtml_legend=1 00:19:50.601 --rc geninfo_all_blocks=1 00:19:50.601 --rc geninfo_unexecuted_blocks=1 00:19:50.601 00:19:50.601 ' 00:19:50.601 18:29:09 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:50.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.601 --rc genhtml_branch_coverage=1 00:19:50.601 --rc genhtml_function_coverage=1 00:19:50.601 --rc genhtml_legend=1 00:19:50.601 --rc geninfo_all_blocks=1 00:19:50.601 --rc geninfo_unexecuted_blocks=1 00:19:50.601 00:19:50.601 ' 00:19:50.601 18:29:09 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:50.601 18:29:09 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:19:50.601 18:29:09 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:50.601 18:29:09 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:50.601 18:29:09 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
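[Editorial note] The `lt 1.15 2` trace above walks scripts/common.sh's cmp_versions: each version string is split on `.`, `-`, and `:` via IFS into an array, components are validated as decimals, and the two arrays are compared in parallel (the shorter padded with zeros) until the first differing component decides the result. A minimal standalone sketch of the same idea for numeric-only versions — the function name version_lt is illustrative, not part of the harness:

    # Component-wise "less than" for dotted numeric versions, mirroring
    # the split-on-IFS walk traced above. Numeric components only.
    version_lt() {
        local IFS=.-:
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i len=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < len; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first lower component wins
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "1.15 is older than 2"   # prints: 1.15 is older than 2

Here the split of "1.15" yields (1 15) and "2" yields (2), so the first components 1 < 2 settle the comparison immediately, which is why the traced run returns 0 and the lcov-dependent coverage options get enabled.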
00:19:50.601 18:29:09 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:50.601 18:29:09 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:50.601 18:29:09 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:50.601 18:29:09 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:50.601 18:29:09 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:50.601 18:29:09 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:50.601 18:29:09 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:50.601 18:29:09 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:50.601 18:29:09 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:50.601 18:29:09 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:50.601 18:29:09 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:50.601 18:29:09 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:50.601 18:29:09 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:50.601 18:29:09 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:50.601 18:29:09 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:50.601 18:29:09 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:50.601 18:29:09 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:50.601 18:29:09 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:50.601 18:29:09 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:50.601 18:29:09 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:50.601 18:29:09 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:50.601 18:29:09 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:50.601 18:29:09 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:50.601 18:29:09 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:50.601 18:29:09 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:50.601 18:29:09 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:19:50.601 18:29:09 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:19:50.601 18:29:09 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:19:50.601 18:29:09 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:19:50.601 18:29:09 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:19:50.601 18:29:09 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:19:50.601 18:29:09 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:19:50.601 18:29:09 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:19:50.601 18:29:09 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:50.601 18:29:09 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:50.601 18:29:09 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:50.601 18:29:09 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=76239 00:19:50.601 18:29:09 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 76239 00:19:50.601 18:29:09 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:50.601 18:29:09 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76239 ']' 00:19:50.601 18:29:09 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.601 18:29:09 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:50.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:50.601 18:29:09 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.601 18:29:09 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:50.601 18:29:09 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:50.861 [2024-11-20 18:29:09.253235] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:19:50.861 [2024-11-20 18:29:09.253378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76239 ] 00:19:50.861 [2024-11-20 18:29:09.416811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:51.121 [2024-11-20 18:29:09.547436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:51.121 [2024-11-20 18:29:09.548152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.121 [2024-11-20 18:29:09.548162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:51.691 18:29:10 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:51.691 18:29:10 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:51.691 18:29:10 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:51.691 18:29:10 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:19:51.691 18:29:10 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:51.691 18:29:10 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:19:51.691 18:29:10 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:19:51.691 18:29:10 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:52.263 18:29:10 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:52.264 18:29:10 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:19:52.264 18:29:10 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:52.264 18:29:10 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:52.264 18:29:10 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:52.264 18:29:10 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:52.264 18:29:10 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:52.264 18:29:10 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:52.264 18:29:10 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:52.264 { 00:19:52.264 "name": "nvme0n1", 00:19:52.264 "aliases": [ 
00:19:52.264 "134e459c-f0c5-43a7-967f-60b8608fa310" 00:19:52.264 ], 00:19:52.264 "product_name": "NVMe disk", 00:19:52.264 "block_size": 4096, 00:19:52.264 "num_blocks": 1310720, 00:19:52.264 "uuid": "134e459c-f0c5-43a7-967f-60b8608fa310", 00:19:52.264 "numa_id": -1, 00:19:52.264 "assigned_rate_limits": { 00:19:52.264 "rw_ios_per_sec": 0, 00:19:52.264 "rw_mbytes_per_sec": 0, 00:19:52.264 "r_mbytes_per_sec": 0, 00:19:52.264 "w_mbytes_per_sec": 0 00:19:52.264 }, 00:19:52.264 "claimed": true, 00:19:52.264 "claim_type": "read_many_write_one", 00:19:52.264 "zoned": false, 00:19:52.264 "supported_io_types": { 00:19:52.264 "read": true, 00:19:52.264 "write": true, 00:19:52.264 "unmap": true, 00:19:52.264 "flush": true, 00:19:52.264 "reset": true, 00:19:52.264 "nvme_admin": true, 00:19:52.264 "nvme_io": true, 00:19:52.264 "nvme_io_md": false, 00:19:52.264 "write_zeroes": true, 00:19:52.264 "zcopy": false, 00:19:52.264 "get_zone_info": false, 00:19:52.264 "zone_management": false, 00:19:52.264 "zone_append": false, 00:19:52.264 "compare": true, 00:19:52.264 "compare_and_write": false, 00:19:52.264 "abort": true, 00:19:52.264 "seek_hole": false, 00:19:52.264 "seek_data": false, 00:19:52.264 "copy": true, 00:19:52.264 "nvme_iov_md": false 00:19:52.264 }, 00:19:52.264 "driver_specific": { 00:19:52.264 "nvme": [ 00:19:52.264 { 00:19:52.264 "pci_address": "0000:00:11.0", 00:19:52.264 "trid": { 00:19:52.264 "trtype": "PCIe", 00:19:52.264 "traddr": "0000:00:11.0" 00:19:52.264 }, 00:19:52.264 "ctrlr_data": { 00:19:52.264 "cntlid": 0, 00:19:52.264 "vendor_id": "0x1b36", 00:19:52.264 "model_number": "QEMU NVMe Ctrl", 00:19:52.264 "serial_number": "12341", 00:19:52.264 "firmware_revision": "8.0.0", 00:19:52.264 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:52.264 "oacs": { 00:19:52.264 "security": 0, 00:19:52.264 "format": 1, 00:19:52.264 "firmware": 0, 00:19:52.264 "ns_manage": 1 00:19:52.264 }, 00:19:52.264 "multi_ctrlr": false, 00:19:52.264 "ana_reporting": false 00:19:52.264 }, 00:19:52.264 "vs": { 00:19:52.264 "nvme_version": "1.4" 00:19:52.264 }, 00:19:52.264 "ns_data": { 00:19:52.264 "id": 1, 00:19:52.264 "can_share": false 00:19:52.264 } 00:19:52.264 } 00:19:52.264 ], 00:19:52.264 "mp_policy": "active_passive" 00:19:52.264 } 00:19:52.264 } 00:19:52.264 ]' 00:19:52.264 18:29:10 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:52.264 18:29:10 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:52.264 18:29:10 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:52.264 18:29:10 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:52.264 18:29:10 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:52.264 18:29:10 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:19:52.264 18:29:10 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:19:52.264 18:29:10 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:52.264 18:29:10 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:19:52.264 18:29:10 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:52.264 18:29:10 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:52.524 18:29:11 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=db6313c3-71a8-4324-a388-a01b7008b3bf 00:19:52.524 18:29:11 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:19:52.524 18:29:11 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u db6313c3-71a8-4324-a388-a01b7008b3bf 00:19:52.785 18:29:11 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:53.045 18:29:11 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=57f3bc0d-8824-4a36-b8c4-a17ca72e81ae 00:19:53.045 18:29:11 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 57f3bc0d-8824-4a36-b8c4-a17ca72e81ae 00:19:53.305 18:29:11 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=a27c7149-0675-4b4f-9800-9e4a77fd900d 00:19:53.305 18:29:11 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 a27c7149-0675-4b4f-9800-9e4a77fd900d 00:19:53.305 18:29:11 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:19:53.305 18:29:11 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:53.305 18:29:11 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=a27c7149-0675-4b4f-9800-9e4a77fd900d 00:19:53.305 18:29:11 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:19:53.305 18:29:11 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size a27c7149-0675-4b4f-9800-9e4a77fd900d 00:19:53.305 18:29:11 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=a27c7149-0675-4b4f-9800-9e4a77fd900d 00:19:53.305 18:29:11 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:53.305 18:29:11 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:53.305 18:29:11 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:53.305 18:29:11 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a27c7149-0675-4b4f-9800-9e4a77fd900d 00:19:53.565 18:29:11 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:53.565 { 00:19:53.565 "name": "a27c7149-0675-4b4f-9800-9e4a77fd900d", 00:19:53.565 "aliases": [ 00:19:53.565 "lvs/nvme0n1p0" 00:19:53.565 ], 00:19:53.565 "product_name": "Logical Volume", 00:19:53.565 "block_size": 4096, 00:19:53.565 "num_blocks": 26476544, 00:19:53.565 "uuid": "a27c7149-0675-4b4f-9800-9e4a77fd900d", 00:19:53.565 "assigned_rate_limits": { 00:19:53.565 "rw_ios_per_sec": 0, 00:19:53.565 "rw_mbytes_per_sec": 0, 00:19:53.565 "r_mbytes_per_sec": 0, 00:19:53.565 "w_mbytes_per_sec": 0 00:19:53.565 }, 00:19:53.565 "claimed": false, 00:19:53.565 "zoned": false, 00:19:53.565 "supported_io_types": { 00:19:53.565 "read": true, 00:19:53.565 "write": true, 00:19:53.565 "unmap": true, 00:19:53.565 "flush": false, 00:19:53.565 "reset": true, 00:19:53.565 "nvme_admin": false, 00:19:53.565 "nvme_io": false, 00:19:53.565 "nvme_io_md": false, 00:19:53.566 "write_zeroes": true, 00:19:53.566 "zcopy": false, 00:19:53.566 "get_zone_info": false, 00:19:53.566 "zone_management": false, 00:19:53.566 "zone_append": false, 00:19:53.566 "compare": false, 00:19:53.566 "compare_and_write": false, 00:19:53.566 "abort": false, 00:19:53.566 "seek_hole": true, 00:19:53.566 "seek_data": true, 00:19:53.566 "copy": false, 00:19:53.566 "nvme_iov_md": false 00:19:53.566 }, 00:19:53.566 "driver_specific": { 00:19:53.566 "lvol": { 00:19:53.566 "lvol_store_uuid": "57f3bc0d-8824-4a36-b8c4-a17ca72e81ae", 00:19:53.566 "base_bdev": "nvme0n1", 00:19:53.566 "thin_provision": true, 00:19:53.566 "num_allocated_clusters": 0, 00:19:53.566 "snapshot": false, 00:19:53.566 "clone": false, 00:19:53.566 "esnap_clone": false 00:19:53.566 } 00:19:53.566 } 00:19:53.566 } 00:19:53.566 ]' 00:19:53.566 18:29:11 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:53.566 18:29:11 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:53.566 18:29:11 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:53.566 18:29:12 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:53.566 18:29:12 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:53.566 18:29:12 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:53.566 18:29:12 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:19:53.566 18:29:12 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:19:53.566 18:29:12 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:53.824 18:29:12 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:53.824 18:29:12 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:53.824 18:29:12 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size a27c7149-0675-4b4f-9800-9e4a77fd900d 00:19:53.824 18:29:12 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=a27c7149-0675-4b4f-9800-9e4a77fd900d 00:19:53.824 18:29:12 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:53.824 18:29:12 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:53.824 18:29:12 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:53.824 18:29:12 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a27c7149-0675-4b4f-9800-9e4a77fd900d 00:19:54.082 18:29:12 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:54.082 { 00:19:54.082 "name": "a27c7149-0675-4b4f-9800-9e4a77fd900d", 00:19:54.082 "aliases": [ 00:19:54.082 "lvs/nvme0n1p0" 00:19:54.082 ], 00:19:54.082 "product_name": "Logical Volume", 00:19:54.082 "block_size": 4096, 00:19:54.082 "num_blocks": 26476544, 00:19:54.082 "uuid": "a27c7149-0675-4b4f-9800-9e4a77fd900d", 00:19:54.082 "assigned_rate_limits": { 00:19:54.082 "rw_ios_per_sec": 0, 00:19:54.082 "rw_mbytes_per_sec": 0, 00:19:54.082 "r_mbytes_per_sec": 0, 00:19:54.082 "w_mbytes_per_sec": 0 00:19:54.082 }, 00:19:54.082 "claimed": false, 00:19:54.082 "zoned": false, 00:19:54.082 "supported_io_types": { 00:19:54.082 "read": true, 00:19:54.082 "write": true, 00:19:54.082 "unmap": true, 00:19:54.082 "flush": false, 00:19:54.082 "reset": true, 00:19:54.082 "nvme_admin": false, 00:19:54.082 "nvme_io": false, 00:19:54.082 "nvme_io_md": false, 00:19:54.082 "write_zeroes": true, 00:19:54.082 "zcopy": false, 00:19:54.082 "get_zone_info": false, 00:19:54.082 "zone_management": false, 00:19:54.082 "zone_append": false, 00:19:54.082 "compare": false, 00:19:54.082 "compare_and_write": false, 00:19:54.082 "abort": false, 00:19:54.082 "seek_hole": true, 00:19:54.082 "seek_data": true, 00:19:54.082 "copy": false, 00:19:54.082 "nvme_iov_md": false 00:19:54.082 }, 00:19:54.082 "driver_specific": { 00:19:54.082 "lvol": { 00:19:54.082 "lvol_store_uuid": "57f3bc0d-8824-4a36-b8c4-a17ca72e81ae", 00:19:54.082 "base_bdev": "nvme0n1", 00:19:54.082 "thin_provision": true, 00:19:54.082 "num_allocated_clusters": 0, 00:19:54.082 "snapshot": false, 00:19:54.082 "clone": false, 00:19:54.082 "esnap_clone": false 00:19:54.082 } 00:19:54.082 } 00:19:54.082 } 00:19:54.082 ]' 00:19:54.082 18:29:12 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:54.082 18:29:12 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:19:54.082 18:29:12 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:54.082 18:29:12 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:54.082 18:29:12 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:54.082 18:29:12 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:54.082 18:29:12 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:19:54.082 18:29:12 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:54.340 18:29:12 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:54.340 18:29:12 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:54.340 18:29:12 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size a27c7149-0675-4b4f-9800-9e4a77fd900d 00:19:54.340 18:29:12 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=a27c7149-0675-4b4f-9800-9e4a77fd900d 00:19:54.340 18:29:12 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:54.340 18:29:12 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:54.340 18:29:12 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:54.340 18:29:12 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a27c7149-0675-4b4f-9800-9e4a77fd900d 00:19:54.340 18:29:12 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:54.340 { 00:19:54.340 "name": "a27c7149-0675-4b4f-9800-9e4a77fd900d", 00:19:54.340 "aliases": [ 00:19:54.340 "lvs/nvme0n1p0" 00:19:54.340 ], 00:19:54.340 "product_name": "Logical Volume", 00:19:54.340 "block_size": 4096, 00:19:54.340 "num_blocks": 26476544, 00:19:54.340 "uuid": "a27c7149-0675-4b4f-9800-9e4a77fd900d", 00:19:54.340 "assigned_rate_limits": { 00:19:54.340 "rw_ios_per_sec": 0, 00:19:54.340 "rw_mbytes_per_sec": 0, 00:19:54.340 "r_mbytes_per_sec": 0, 00:19:54.340 "w_mbytes_per_sec": 0 00:19:54.340 }, 00:19:54.340 "claimed": false, 00:19:54.340 "zoned": false, 00:19:54.340 "supported_io_types": { 00:19:54.340 "read": true, 00:19:54.340 "write": true, 00:19:54.340 "unmap": true, 00:19:54.340 "flush": false, 00:19:54.340 "reset": true, 00:19:54.340 "nvme_admin": false, 00:19:54.340 "nvme_io": false, 00:19:54.340 "nvme_io_md": false, 00:19:54.340 "write_zeroes": true, 00:19:54.340 "zcopy": false, 00:19:54.340 "get_zone_info": false, 00:19:54.340 "zone_management": false, 00:19:54.340 "zone_append": false, 00:19:54.340 "compare": false, 00:19:54.340 "compare_and_write": false, 00:19:54.340 "abort": false, 00:19:54.340 "seek_hole": true, 00:19:54.340 "seek_data": true, 00:19:54.340 "copy": false, 00:19:54.340 "nvme_iov_md": false 00:19:54.340 }, 00:19:54.340 "driver_specific": { 00:19:54.340 "lvol": { 00:19:54.341 "lvol_store_uuid": "57f3bc0d-8824-4a36-b8c4-a17ca72e81ae", 00:19:54.341 "base_bdev": "nvme0n1", 00:19:54.341 "thin_provision": true, 00:19:54.341 "num_allocated_clusters": 0, 00:19:54.341 "snapshot": false, 00:19:54.341 "clone": false, 00:19:54.341 "esnap_clone": false 00:19:54.341 } 00:19:54.341 } 00:19:54.341 } 00:19:54.341 ]' 00:19:54.341 18:29:12 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:54.599 18:29:12 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:54.599 18:29:12 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:54.599 18:29:13 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:19:54.599 18:29:13 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:54.600 18:29:13 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:54.600 18:29:13 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:54.600 18:29:13 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a27c7149-0675-4b4f-9800-9e4a77fd900d -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:54.600 [2024-11-20 18:29:13.204038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.600 [2024-11-20 18:29:13.204079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:54.600 [2024-11-20 18:29:13.204105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:54.600 [2024-11-20 18:29:13.204115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.600 [2024-11-20 18:29:13.206942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.600 [2024-11-20 18:29:13.207076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:54.600 [2024-11-20 18:29:13.207108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.802 ms 00:19:54.600 [2024-11-20 18:29:13.207117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.600 [2024-11-20 18:29:13.207227] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:54.600 [2024-11-20 18:29:13.207910] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:54.600 [2024-11-20 18:29:13.207935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.600 [2024-11-20 18:29:13.207945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:54.600 [2024-11-20 18:29:13.207955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.716 ms 00:19:54.600 [2024-11-20 18:29:13.207963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.600 [2024-11-20 18:29:13.208061] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID cbb96cf1-d64b-463e-90b3-2025057ee758 00:19:54.600 [2024-11-20 18:29:13.209190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.600 [2024-11-20 18:29:13.209216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:54.600 [2024-11-20 18:29:13.209226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:19:54.600 [2024-11-20 18:29:13.209235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.600 [2024-11-20 18:29:13.214758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.600 [2024-11-20 18:29:13.214785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:54.600 [2024-11-20 18:29:13.214797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.460 ms 00:19:54.600 [2024-11-20 18:29:13.214808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.600 [2024-11-20 18:29:13.214922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.600 [2024-11-20 18:29:13.214939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:54.600 [2024-11-20 18:29:13.214948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.064 ms 00:19:54.600 [2024-11-20 18:29:13.214960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.600 [2024-11-20 18:29:13.214993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.600 [2024-11-20 18:29:13.215004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:54.600 [2024-11-20 18:29:13.215013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:54.600 [2024-11-20 18:29:13.215022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.600 [2024-11-20 18:29:13.215052] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:54.600 [2024-11-20 18:29:13.218635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.600 [2024-11-20 18:29:13.218662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:54.600 [2024-11-20 18:29:13.218677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.586 ms 00:19:54.600 [2024-11-20 18:29:13.218685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.600 [2024-11-20 18:29:13.218729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.600 [2024-11-20 18:29:13.218739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:54.600 [2024-11-20 18:29:13.218749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:54.600 [2024-11-20 18:29:13.218768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.600 [2024-11-20 18:29:13.218794] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:54.600 [2024-11-20 18:29:13.218925] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:54.600 [2024-11-20 18:29:13.218945] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:54.600 [2024-11-20 18:29:13.218956] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:54.600 [2024-11-20 18:29:13.218969] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:54.600 [2024-11-20 18:29:13.218978] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:54.600 [2024-11-20 18:29:13.218989] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:54.600 [2024-11-20 18:29:13.218996] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:54.600 [2024-11-20 18:29:13.219006] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:54.600 [2024-11-20 18:29:13.219015] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:54.600 [2024-11-20 18:29:13.219025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.600 [2024-11-20 18:29:13.219032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:54.600 [2024-11-20 18:29:13.219042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:19:54.600 [2024-11-20 18:29:13.219050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.600 [2024-11-20 18:29:13.219166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.600 
[2024-11-20 18:29:13.219177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:54.600 [2024-11-20 18:29:13.219187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:19:54.600 [2024-11-20 18:29:13.219195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.600 [2024-11-20 18:29:13.219304] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:54.600 [2024-11-20 18:29:13.219318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:54.600 [2024-11-20 18:29:13.219329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:54.600 [2024-11-20 18:29:13.219337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.600 [2024-11-20 18:29:13.219347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:54.600 [2024-11-20 18:29:13.219354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:54.600 [2024-11-20 18:29:13.219362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:54.600 [2024-11-20 18:29:13.219369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:54.600 [2024-11-20 18:29:13.219378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:54.600 [2024-11-20 18:29:13.219385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:54.600 [2024-11-20 18:29:13.219393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:54.600 [2024-11-20 18:29:13.219401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:54.600 [2024-11-20 18:29:13.219410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:54.600 [2024-11-20 18:29:13.219417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:54.600 [2024-11-20 18:29:13.219426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:54.600 [2024-11-20 18:29:13.219432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.600 [2024-11-20 18:29:13.219442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:54.600 [2024-11-20 18:29:13.219449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:54.600 [2024-11-20 18:29:13.219456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.600 [2024-11-20 18:29:13.219464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:54.600 [2024-11-20 18:29:13.219473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:54.600 [2024-11-20 18:29:13.219480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:54.600 [2024-11-20 18:29:13.219488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:54.600 [2024-11-20 18:29:13.219495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:54.600 [2024-11-20 18:29:13.219504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:54.600 [2024-11-20 18:29:13.219510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:54.600 [2024-11-20 18:29:13.219518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:54.600 [2024-11-20 18:29:13.219525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:54.600 [2024-11-20 18:29:13.219533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:19:54.600 [2024-11-20 18:29:13.219540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:54.600 [2024-11-20 18:29:13.219548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:54.600 [2024-11-20 18:29:13.219554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:54.600 [2024-11-20 18:29:13.219565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:54.600 [2024-11-20 18:29:13.219572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:54.600 [2024-11-20 18:29:13.219581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:54.600 [2024-11-20 18:29:13.219587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:54.600 [2024-11-20 18:29:13.219596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:54.600 [2024-11-20 18:29:13.219603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:54.600 [2024-11-20 18:29:13.219612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:54.600 [2024-11-20 18:29:13.219619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.600 [2024-11-20 18:29:13.219627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:54.601 [2024-11-20 18:29:13.219634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:54.601 [2024-11-20 18:29:13.219642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.601 [2024-11-20 18:29:13.219649] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:54.601 [2024-11-20 18:29:13.219657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:54.601 [2024-11-20 18:29:13.219665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:54.601 [2024-11-20 18:29:13.219674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.601 [2024-11-20 18:29:13.219683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:54.601 [2024-11-20 18:29:13.219695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:54.601 [2024-11-20 18:29:13.219702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:54.601 [2024-11-20 18:29:13.219711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:54.601 [2024-11-20 18:29:13.219718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:54.601 [2024-11-20 18:29:13.219726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:54.601 [2024-11-20 18:29:13.219736] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:54.601 [2024-11-20 18:29:13.219748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:54.601 [2024-11-20 18:29:13.219756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:54.601 [2024-11-20 18:29:13.219766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:54.601 [2024-11-20 18:29:13.219774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:19:54.601 [2024-11-20 18:29:13.219782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:54.601 [2024-11-20 18:29:13.219790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:54.601 [2024-11-20 18:29:13.219798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:54.601 [2024-11-20 18:29:13.219806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:54.601 [2024-11-20 18:29:13.219814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:54.601 [2024-11-20 18:29:13.219822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:54.601 [2024-11-20 18:29:13.219832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:54.601 [2024-11-20 18:29:13.219839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:54.601 [2024-11-20 18:29:13.219848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:54.601 [2024-11-20 18:29:13.219855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:54.601 [2024-11-20 18:29:13.219864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:54.601 [2024-11-20 18:29:13.219871] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:54.601 [2024-11-20 18:29:13.219886] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:54.601 [2024-11-20 18:29:13.219894] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:54.601 [2024-11-20 18:29:13.219903] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:54.601 [2024-11-20 18:29:13.219910] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:54.601 [2024-11-20 18:29:13.219920] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:54.601 [2024-11-20 18:29:13.219927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.601 [2024-11-20 18:29:13.219936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:54.601 [2024-11-20 18:29:13.219944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.693 ms 00:19:54.601 [2024-11-20 18:29:13.219952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.601 [2024-11-20 18:29:13.220014] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:19:54.601 [2024-11-20 18:29:13.220027] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:57.126 [2024-11-20 18:29:15.744025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.126 [2024-11-20 18:29:15.744080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:57.126 [2024-11-20 18:29:15.744104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2524.003 ms 00:19:57.126 [2024-11-20 18:29:15.744114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.385 [2024-11-20 18:29:15.769900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.385 [2024-11-20 18:29:15.769943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:57.385 [2024-11-20 18:29:15.769957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.528 ms 00:19:57.385 [2024-11-20 18:29:15.769967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.385 [2024-11-20 18:29:15.770088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.385 [2024-11-20 18:29:15.770126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:57.385 [2024-11-20 18:29:15.770137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:57.385 [2024-11-20 18:29:15.770148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.385 [2024-11-20 18:29:15.810896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.385 [2024-11-20 18:29:15.810938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:57.385 [2024-11-20 18:29:15.810950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.705 ms 00:19:57.385 [2024-11-20 18:29:15.810960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.385 [2024-11-20 18:29:15.811034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.385 [2024-11-20 18:29:15.811052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:57.385 [2024-11-20 18:29:15.811062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:57.385 [2024-11-20 18:29:15.811072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.385 [2024-11-20 18:29:15.811407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.385 [2024-11-20 18:29:15.811433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:57.385 [2024-11-20 18:29:15.811443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:19:57.385 [2024-11-20 18:29:15.811452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.385 [2024-11-20 18:29:15.811557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.385 [2024-11-20 18:29:15.811573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:57.385 [2024-11-20 18:29:15.811582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:19:57.385 [2024-11-20 18:29:15.811593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.385 [2024-11-20 18:29:15.827835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.385 [2024-11-20 18:29:15.827865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:19:57.385 [2024-11-20 18:29:15.827876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.208 ms 00:19:57.385 [2024-11-20 18:29:15.827885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.385 [2024-11-20 18:29:15.839266] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:57.385 [2024-11-20 18:29:15.853731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.385 [2024-11-20 18:29:15.853886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:57.385 [2024-11-20 18:29:15.853906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.764 ms 00:19:57.385 [2024-11-20 18:29:15.853914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.385 [2024-11-20 18:29:15.920296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.385 [2024-11-20 18:29:15.920334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:57.385 [2024-11-20 18:29:15.920349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.305 ms 00:19:57.385 [2024-11-20 18:29:15.920357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.385 [2024-11-20 18:29:15.920562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.385 [2024-11-20 18:29:15.920578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:57.385 [2024-11-20 18:29:15.920592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:19:57.385 [2024-11-20 18:29:15.920608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.385 [2024-11-20 18:29:15.943774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.385 [2024-11-20 18:29:15.943904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:57.385 [2024-11-20 18:29:15.943926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.141 ms 00:19:57.385 [2024-11-20 18:29:15.943934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.385 [2024-11-20 18:29:15.966555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.385 [2024-11-20 18:29:15.966676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:57.385 [2024-11-20 18:29:15.966696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.566 ms 00:19:57.385 [2024-11-20 18:29:15.966704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.385 [2024-11-20 18:29:15.967300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.385 [2024-11-20 18:29:15.967318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:57.385 [2024-11-20 18:29:15.967330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:19:57.385 [2024-11-20 18:29:15.967338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.644 [2024-11-20 18:29:16.045386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.644 [2024-11-20 18:29:16.045520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:57.644 [2024-11-20 18:29:16.045547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.018 ms 00:19:57.644 [2024-11-20 18:29:16.045556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
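[Editorial note] The layout and L2P figures reported during this FTL startup are internally consistent and can be checked directly from the log: the layout dump lists 23592960 L2P entries at 4 bytes each, which is exactly the 90.00 MiB l2p region (and the NV cache regions tile contiguously from it — l2p at offset 0.12 MiB + 90.00 MiB puts band_md at 90.12 MiB, and so on up to nvc_md_mirror at 124.00 MiB); 23592960 addressable 4096-byte blocks are the 90 GiB exposed as ftl0's num_blocks in the bdev dump below; and with bdev_ftl_create's --l2p_dram_limit 60, the resident slice of that table is capped at 60 MiB, matching the "l2p maximum resident size is: 59 (of 60) MiB" notice above. A quick arithmetic recheck — illustrative shell only, not part of the test:

    # Recompute the sizes the FTL startup log reports above.
    entries=23592960 entry_size=4 block_size=4096
    echo "L2P table:  $(( entries * entry_size / 1024 / 1024 )) MiB"         # 90 -> the 90.00 MiB l2p region
    echo "User space: $(( entries * block_size / 1024 / 1024 / 1024 )) GiB"  # 90 -> 23592960 blocks of 4 KiB
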
00:19:57.644 [2024-11-20 18:29:16.069973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.644 [2024-11-20 18:29:16.070008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:57.644 [2024-11-20 18:29:16.070023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.050 ms 00:19:57.644 [2024-11-20 18:29:16.070032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.644 [2024-11-20 18:29:16.093038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.644 [2024-11-20 18:29:16.093226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:57.644 [2024-11-20 18:29:16.093247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.949 ms 00:19:57.644 [2024-11-20 18:29:16.093255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.644 [2024-11-20 18:29:16.116473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.644 [2024-11-20 18:29:16.116589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:57.644 [2024-11-20 18:29:16.116609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.154 ms 00:19:57.644 [2024-11-20 18:29:16.116627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.644 [2024-11-20 18:29:16.116687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.644 [2024-11-20 18:29:16.116704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:57.644 [2024-11-20 18:29:16.116717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:57.644 [2024-11-20 18:29:16.116725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.644 [2024-11-20 18:29:16.116791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.644 [2024-11-20 18:29:16.116805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:57.644 [2024-11-20 18:29:16.116816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:57.644 [2024-11-20 18:29:16.116823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.644 [2024-11-20 18:29:16.117634] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:57.644 { 00:19:57.644 "name": "ftl0", 00:19:57.644 "uuid": "cbb96cf1-d64b-463e-90b3-2025057ee758" 00:19:57.644 } 00:19:57.644 [2024-11-20 18:29:16.120444] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2913.315 ms, result 0 00:19:57.644 [2024-11-20 18:29:16.121211] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:57.644 18:29:16 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:19:57.644 18:29:16 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:19:57.644 18:29:16 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:57.644 18:29:16 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:19:57.644 18:29:16 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:57.644 18:29:16 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:57.644 18:29:16 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:57.902 18:29:16 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:57.902 [ 00:19:57.902 { 00:19:57.902 "name": "ftl0", 00:19:57.902 "aliases": [ 00:19:57.902 "cbb96cf1-d64b-463e-90b3-2025057ee758" 00:19:57.902 ], 00:19:57.902 "product_name": "FTL disk", 00:19:57.902 "block_size": 4096, 00:19:57.902 "num_blocks": 23592960, 00:19:57.902 "uuid": "cbb96cf1-d64b-463e-90b3-2025057ee758", 00:19:57.902 "assigned_rate_limits": { 00:19:57.902 "rw_ios_per_sec": 0, 00:19:57.902 "rw_mbytes_per_sec": 0, 00:19:57.902 "r_mbytes_per_sec": 0, 00:19:57.902 "w_mbytes_per_sec": 0 00:19:57.902 }, 00:19:57.902 "claimed": false, 00:19:57.902 "zoned": false, 00:19:57.902 "supported_io_types": { 00:19:57.902 "read": true, 00:19:57.902 "write": true, 00:19:57.902 "unmap": true, 00:19:57.902 "flush": true, 00:19:57.902 "reset": false, 00:19:57.902 "nvme_admin": false, 00:19:57.902 "nvme_io": false, 00:19:57.902 "nvme_io_md": false, 00:19:57.902 "write_zeroes": true, 00:19:57.902 "zcopy": false, 00:19:57.902 "get_zone_info": false, 00:19:57.902 "zone_management": false, 00:19:57.902 "zone_append": false, 00:19:57.902 "compare": false, 00:19:57.902 "compare_and_write": false, 00:19:57.902 "abort": false, 00:19:57.902 "seek_hole": false, 00:19:57.902 "seek_data": false, 00:19:57.902 "copy": false, 00:19:57.902 "nvme_iov_md": false 00:19:57.902 }, 00:19:57.902 "driver_specific": { 00:19:57.902 "ftl": { 00:19:57.902 "base_bdev": "a27c7149-0675-4b4f-9800-9e4a77fd900d", 00:19:57.902 "cache": "nvc0n1p0" 00:19:57.902 } 00:19:57.902 } 00:19:57.902 } 00:19:57.902 ] 00:19:58.159 18:29:16 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:19:58.159 18:29:16 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:19:58.159 18:29:16 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:58.159 18:29:16 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:19:58.159 18:29:16 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:19:58.418 18:29:16 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:19:58.418 { 00:19:58.418 "name": "ftl0", 00:19:58.418 "aliases": [ 00:19:58.418 "cbb96cf1-d64b-463e-90b3-2025057ee758" 00:19:58.418 ], 00:19:58.418 "product_name": "FTL disk", 00:19:58.418 "block_size": 4096, 00:19:58.418 "num_blocks": 23592960, 00:19:58.418 "uuid": "cbb96cf1-d64b-463e-90b3-2025057ee758", 00:19:58.418 "assigned_rate_limits": { 00:19:58.418 "rw_ios_per_sec": 0, 00:19:58.418 "rw_mbytes_per_sec": 0, 00:19:58.418 "r_mbytes_per_sec": 0, 00:19:58.418 "w_mbytes_per_sec": 0 00:19:58.418 }, 00:19:58.418 "claimed": false, 00:19:58.418 "zoned": false, 00:19:58.418 "supported_io_types": { 00:19:58.418 "read": true, 00:19:58.418 "write": true, 00:19:58.418 "unmap": true, 00:19:58.418 "flush": true, 00:19:58.418 "reset": false, 00:19:58.418 "nvme_admin": false, 00:19:58.418 "nvme_io": false, 00:19:58.418 "nvme_io_md": false, 00:19:58.418 "write_zeroes": true, 00:19:58.418 "zcopy": false, 00:19:58.418 "get_zone_info": false, 00:19:58.418 "zone_management": false, 00:19:58.418 "zone_append": false, 00:19:58.418 "compare": false, 00:19:58.418 "compare_and_write": false, 00:19:58.418 "abort": false, 00:19:58.418 "seek_hole": false, 00:19:58.418 "seek_data": false, 00:19:58.418 "copy": false, 00:19:58.418 "nvme_iov_md": false 00:19:58.418 }, 00:19:58.418 "driver_specific": { 00:19:58.418 "ftl": { 00:19:58.418 "base_bdev": "a27c7149-0675-4b4f-9800-9e4a77fd900d", 
00:19:58.418 "cache": "nvc0n1p0" 00:19:58.418 } 00:19:58.418 } 00:19:58.418 } 00:19:58.418 ]' 00:19:58.418 18:29:16 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:19:58.418 18:29:16 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:19:58.418 18:29:16 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:58.676 [2024-11-20 18:29:17.157181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.676 [2024-11-20 18:29:17.157222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:58.676 [2024-11-20 18:29:17.157237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:58.676 [2024-11-20 18:29:17.157250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.676 [2024-11-20 18:29:17.157277] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:58.676 [2024-11-20 18:29:17.159987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.676 [2024-11-20 18:29:17.160130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:58.676 [2024-11-20 18:29:17.160154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.693 ms 00:19:58.676 [2024-11-20 18:29:17.160163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.676 [2024-11-20 18:29:17.160740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.676 [2024-11-20 18:29:17.160754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:58.676 [2024-11-20 18:29:17.160767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:19:58.676 [2024-11-20 18:29:17.160775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.676 [2024-11-20 18:29:17.164419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.676 [2024-11-20 18:29:17.164439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:58.676 [2024-11-20 18:29:17.164449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.619 ms 00:19:58.676 [2024-11-20 18:29:17.164457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.676 [2024-11-20 18:29:17.171554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.676 [2024-11-20 18:29:17.171648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:58.676 [2024-11-20 18:29:17.171712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.037 ms 00:19:58.676 [2024-11-20 18:29:17.171741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.676 [2024-11-20 18:29:17.195470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.676 [2024-11-20 18:29:17.195576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:58.676 [2024-11-20 18:29:17.195640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.547 ms 00:19:58.676 [2024-11-20 18:29:17.195668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.676 [2024-11-20 18:29:17.210447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.676 [2024-11-20 18:29:17.210566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:58.676 [2024-11-20 18:29:17.210635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 14.458 ms 00:19:58.676 [2024-11-20 18:29:17.210667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.676 [2024-11-20 18:29:17.210993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.676 [2024-11-20 18:29:17.211072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:58.676 [2024-11-20 18:29:17.211139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:19:58.676 [2024-11-20 18:29:17.211168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.676 [2024-11-20 18:29:17.233900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.676 [2024-11-20 18:29:17.233997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:58.676 [2024-11-20 18:29:17.234053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.693 ms 00:19:58.676 [2024-11-20 18:29:17.234079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.676 [2024-11-20 18:29:17.256736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.676 [2024-11-20 18:29:17.256832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:58.676 [2024-11-20 18:29:17.256888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.499 ms 00:19:58.676 [2024-11-20 18:29:17.256914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.676 [2024-11-20 18:29:17.278935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.676 [2024-11-20 18:29:17.279030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:58.676 [2024-11-20 18:29:17.279086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.950 ms 00:19:58.676 [2024-11-20 18:29:17.279123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.676 [2024-11-20 18:29:17.301196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.676 [2024-11-20 18:29:17.301291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:58.676 [2024-11-20 18:29:17.301344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.884 ms 00:19:58.676 [2024-11-20 18:29:17.301366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.676 [2024-11-20 18:29:17.301496] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:58.676 [2024-11-20 18:29:17.301547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:58.676 [2024-11-20 18:29:17.301709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:58.676 [2024-11-20 18:29:17.301745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:58.676 [2024-11-20 18:29:17.301783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:58.676 [2024-11-20 18:29:17.301853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:58.676 [2024-11-20 18:29:17.301893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:58.676 [2024-11-20 18:29:17.301928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:58.676 [2024-11-20 18:29:17.301998] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:58.676 [2024-11-20 18:29:17.302035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:58.676 [2024-11-20 18:29:17.302071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:58.676 [2024-11-20 18:29:17.302117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:58.676 [2024-11-20 18:29:17.302188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:58.676 [2024-11-20 18:29:17.302222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:58.676 [2024-11-20 18:29:17.302258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:58.676 [2024-11-20 18:29:17.302332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:58.676 [2024-11-20 18:29:17.302436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:58.676 [2024-11-20 18:29:17.302472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:58.676 [2024-11-20 18:29:17.302510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:58.676 [2024-11-20 18:29:17.302545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:58.677 [2024-11-20 18:29:17.302651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:58.677 [2024-11-20 18:29:17.302685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:58.677 [2024-11-20 18:29:17.302733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:58.677 [2024-11-20 18:29:17.302768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:58.677 [2024-11-20 18:29:17.302840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:58.677 [2024-11-20 18:29:17.302876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:58.677 [2024-11-20 18:29:17.302912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:58.677 [2024-11-20 18:29:17.302947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:58.677 [2024-11-20 18:29:17.303023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:58.677 [2024-11-20 18:29:17.303057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:58.677 [2024-11-20 18:29:17.303103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 
[2024-11-20 18:29:17.303314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:19:58.936 [2024-11-20 18:29:17.303713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.303998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.304007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.304015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.304025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.304033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:58.936 [2024-11-20 18:29:17.304044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:58.937 [2024-11-20 18:29:17.304051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:58.937 [2024-11-20 18:29:17.304061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:58.937 [2024-11-20 18:29:17.304069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:58.937 [2024-11-20 18:29:17.304079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:58.937 [2024-11-20 18:29:17.304118] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:58.937 [2024-11-20 18:29:17.304131] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cbb96cf1-d64b-463e-90b3-2025057ee758 00:19:58.937 [2024-11-20 18:29:17.304140] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:58.937 [2024-11-20 18:29:17.304149] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:58.937 [2024-11-20 18:29:17.304158] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:58.937 [2024-11-20 18:29:17.304171] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:58.937 [2024-11-20 18:29:17.304181] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:58.937 [2024-11-20 18:29:17.304191] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
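After Set FTL clean state, ftl_debug.c dumps per-band validity, one line per band, and here every one of the 100 bands reads "0 / 261120 wr_cnt: 0 state: free" — exactly what a freshly created device with no user I/O should show. It agrees with the stats block: "total valid LBAs: 0", "user writes: 0", and "WAF: inf" (960 total writes over 0 user writes leaves the write amplification factor undefined). Rather than eyeballing a hundred near-identical lines, the dump can be rolled up by state; a minimal sketch, with ftl_shutdown.log again a hypothetical saved copy of this output:

```bash
#!/usr/bin/env bash
# Roll up the ftl_dev_dump_bands output by band state
# (lines look like: ... Band 5: 0 / 261120 wr_cnt: 0 state: free).
awk '
  /ftl_dev_dump_bands/ && /Band [0-9]+:/ {
    for (i = 1; i <= NF; i++) if ($i == "state:") count[$(i + 1)]++
  }
  END { for (s in count) printf "%-8s %d bands\n", s, count[s] }
' ftl_shutdown.log
```

Expected output for this run: a single "free 100 bands" line.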
00:19:58.937 [2024-11-20 18:29:17.304198] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:58.937 [2024-11-20 18:29:17.304206] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:58.937 [2024-11-20 18:29:17.304212] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:58.937 [2024-11-20 18:29:17.304221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.937 [2024-11-20 18:29:17.304229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:58.937 [2024-11-20 18:29:17.304240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.729 ms 00:19:58.937 [2024-11-20 18:29:17.304248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.937 [2024-11-20 18:29:17.316517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.937 [2024-11-20 18:29:17.316542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:58.937 [2024-11-20 18:29:17.316560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.226 ms 00:19:58.937 [2024-11-20 18:29:17.316568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.937 [2024-11-20 18:29:17.316944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.937 [2024-11-20 18:29:17.316959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:58.937 [2024-11-20 18:29:17.316971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:19:58.937 [2024-11-20 18:29:17.316979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.937 [2024-11-20 18:29:17.360869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.937 [2024-11-20 18:29:17.360899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:58.937 [2024-11-20 18:29:17.360911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.937 [2024-11-20 18:29:17.360919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.937 [2024-11-20 18:29:17.361007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.937 [2024-11-20 18:29:17.361021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:58.937 [2024-11-20 18:29:17.361031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.937 [2024-11-20 18:29:17.361039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.937 [2024-11-20 18:29:17.361090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.937 [2024-11-20 18:29:17.361116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:58.937 [2024-11-20 18:29:17.361131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.937 [2024-11-20 18:29:17.361138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.937 [2024-11-20 18:29:17.361165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.937 [2024-11-20 18:29:17.361173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:58.937 [2024-11-20 18:29:17.361183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.937 [2024-11-20 18:29:17.361191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.937 [2024-11-20 18:29:17.442700] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.937 [2024-11-20 18:29:17.442734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:58.937 [2024-11-20 18:29:17.442746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.937 [2024-11-20 18:29:17.442754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.937 [2024-11-20 18:29:17.506043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.937 [2024-11-20 18:29:17.506206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:58.937 [2024-11-20 18:29:17.506225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.937 [2024-11-20 18:29:17.506233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.937 [2024-11-20 18:29:17.506301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.937 [2024-11-20 18:29:17.506315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:58.937 [2024-11-20 18:29:17.506336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.937 [2024-11-20 18:29:17.506347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.937 [2024-11-20 18:29:17.506399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.937 [2024-11-20 18:29:17.506412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:58.937 [2024-11-20 18:29:17.506422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.937 [2024-11-20 18:29:17.506429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.937 [2024-11-20 18:29:17.506536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.937 [2024-11-20 18:29:17.506551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:58.937 [2024-11-20 18:29:17.506561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.937 [2024-11-20 18:29:17.506568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.937 [2024-11-20 18:29:17.506616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.937 [2024-11-20 18:29:17.506629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:58.937 [2024-11-20 18:29:17.506639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.937 [2024-11-20 18:29:17.506647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.937 [2024-11-20 18:29:17.506694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.937 [2024-11-20 18:29:17.506707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:58.937 [2024-11-20 18:29:17.506718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.937 [2024-11-20 18:29:17.506725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.937 [2024-11-20 18:29:17.506778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.937 [2024-11-20 18:29:17.506792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:58.937 [2024-11-20 18:29:17.506803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.937 [2024-11-20 18:29:17.506810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:58.937 [2024-11-20 18:29:17.506989] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 349.788 ms, result 0 00:19:58.937 true 00:19:58.937 18:29:17 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 76239 00:19:58.937 18:29:17 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76239 ']' 00:19:58.937 18:29:17 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76239 00:19:58.937 18:29:17 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:58.937 18:29:17 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:58.937 18:29:17 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76239 00:19:58.937 18:29:17 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:58.937 18:29:17 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:58.937 18:29:17 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76239' 00:19:58.937 killing process with pid 76239 00:19:58.937 18:29:17 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76239 00:19:58.937 18:29:17 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76239 00:20:05.491 18:29:23 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:20:05.491 65536+0 records in 00:20:05.491 65536+0 records out 00:20:05.491 268435456 bytes (268 MB, 256 MiB) copied, 0.803659 s, 334 MB/s 00:20:05.491 18:29:23 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:05.491 [2024-11-20 18:29:24.025921] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
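With the 'FTL shutdown' management process finished (349.788 ms) and process 76239 killed, trim.sh@66 generates the test payload: dd pulls 65536 4 KiB blocks (268435456 bytes = 256 MiB) from /dev/urandom at ~334 MB/s here, and trim.sh@69 then writes that pattern to the FTL bdev through spdk_dd, recreating the bdev stack from the ftl.json captured earlier with save_subsystem_config. In isolation the two steps look roughly like the sketch below; dd's of= target is not visible in the trace, so reusing the random_pattern path from the spdk_dd --if argument is an assumption.

```bash
#!/usr/bin/env bash
set -euo pipefail
SPDK=/home/vagrant/spdk_repo/spdk    # repo path as seen in the trace

# trim.sh@66 equivalent: 65536 x 4 KiB = 256 MiB of random data.
# (of= is assumed; the trace only shows if=, bs=, and count=.)
dd if=/dev/urandom of="$SPDK/test/ftl/random_pattern" bs=4K count=65536

# trim.sh@69 equivalent: replay the pattern onto bdev ftl0. --json points
# spdk_dd at the subsystem config saved via save_subsystem_config, so it
# brings up the same base bdev + nvc0n1p0 cache stack before writing.
"$SPDK/build/bin/spdk_dd" --if="$SPDK/test/ftl/random_pattern" \
  --ob=ftl0 --json="$SPDK/test/ftl/config/ftl.json"
```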
00:20:05.491 [2024-11-20 18:29:24.026010] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76421 ] 00:20:05.750 [2024-11-20 18:29:24.175257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.750 [2024-11-20 18:29:24.252642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.009 [2024-11-20 18:29:24.456801] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:06.009 [2024-11-20 18:29:24.456851] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:06.009 [2024-11-20 18:29:24.604902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.009 [2024-11-20 18:29:24.604935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:06.009 [2024-11-20 18:29:24.604946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:06.009 [2024-11-20 18:29:24.604952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.009 [2024-11-20 18:29:24.606986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.009 [2024-11-20 18:29:24.607015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:06.009 [2024-11-20 18:29:24.607023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.022 ms 00:20:06.009 [2024-11-20 18:29:24.607029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.009 [2024-11-20 18:29:24.607084] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:06.009 [2024-11-20 18:29:24.607686] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:06.009 [2024-11-20 18:29:24.607702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.009 [2024-11-20 18:29:24.607709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:06.009 [2024-11-20 18:29:24.607716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.623 ms 00:20:06.009 [2024-11-20 18:29:24.607722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.009 [2024-11-20 18:29:24.608682] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:06.009 [2024-11-20 18:29:24.617999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.009 [2024-11-20 18:29:24.618136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:06.009 [2024-11-20 18:29:24.618150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.328 ms 00:20:06.009 [2024-11-20 18:29:24.618156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.009 [2024-11-20 18:29:24.618215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.009 [2024-11-20 18:29:24.618224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:06.009 [2024-11-20 18:29:24.618230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:06.009 [2024-11-20 18:29:24.618235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.009 [2024-11-20 18:29:24.622495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:06.009 [2024-11-20 18:29:24.622518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:06.009 [2024-11-20 18:29:24.622525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.232 ms 00:20:06.009 [2024-11-20 18:29:24.622530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.009 [2024-11-20 18:29:24.622603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.009 [2024-11-20 18:29:24.622610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:06.009 [2024-11-20 18:29:24.622617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:06.009 [2024-11-20 18:29:24.622622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.009 [2024-11-20 18:29:24.622638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.009 [2024-11-20 18:29:24.622645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:06.009 [2024-11-20 18:29:24.622651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:06.009 [2024-11-20 18:29:24.622657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.009 [2024-11-20 18:29:24.622673] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:06.009 [2024-11-20 18:29:24.625275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.009 [2024-11-20 18:29:24.625375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:06.009 [2024-11-20 18:29:24.625387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.605 ms 00:20:06.009 [2024-11-20 18:29:24.625393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.009 [2024-11-20 18:29:24.625422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.009 [2024-11-20 18:29:24.625428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:06.009 [2024-11-20 18:29:24.625435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:06.009 [2024-11-20 18:29:24.625440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.009 [2024-11-20 18:29:24.625454] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:06.009 [2024-11-20 18:29:24.625470] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:06.010 [2024-11-20 18:29:24.625496] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:06.010 [2024-11-20 18:29:24.625508] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:06.010 [2024-11-20 18:29:24.625586] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:06.010 [2024-11-20 18:29:24.625594] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:06.010 [2024-11-20 18:29:24.625602] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:06.010 [2024-11-20 18:29:24.625609] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:06.010 [2024-11-20 18:29:24.625619] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:06.010 [2024-11-20 18:29:24.625625] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:06.010 [2024-11-20 18:29:24.625630] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:06.010 [2024-11-20 18:29:24.625636] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:06.010 [2024-11-20 18:29:24.625641] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:06.010 [2024-11-20 18:29:24.625646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.010 [2024-11-20 18:29:24.625652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:06.010 [2024-11-20 18:29:24.625658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:20:06.010 [2024-11-20 18:29:24.625663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.010 [2024-11-20 18:29:24.625730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.010 [2024-11-20 18:29:24.625736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:06.010 [2024-11-20 18:29:24.625744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:06.010 [2024-11-20 18:29:24.625750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.010 [2024-11-20 18:29:24.625825] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:06.010 [2024-11-20 18:29:24.625833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:06.010 [2024-11-20 18:29:24.625839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:06.010 [2024-11-20 18:29:24.625845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.010 [2024-11-20 18:29:24.625851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:06.010 [2024-11-20 18:29:24.625856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:06.010 [2024-11-20 18:29:24.625861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:06.010 [2024-11-20 18:29:24.625866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:06.010 [2024-11-20 18:29:24.625872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:06.010 [2024-11-20 18:29:24.625877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:06.010 [2024-11-20 18:29:24.625882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:06.010 [2024-11-20 18:29:24.625888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:06.010 [2024-11-20 18:29:24.625893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:06.010 [2024-11-20 18:29:24.625902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:06.010 [2024-11-20 18:29:24.625907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:06.010 [2024-11-20 18:29:24.625912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.010 [2024-11-20 18:29:24.625917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:06.010 [2024-11-20 18:29:24.625923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:06.010 [2024-11-20 18:29:24.625928] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.010 [2024-11-20 18:29:24.625933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:06.010 [2024-11-20 18:29:24.625938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:06.010 [2024-11-20 18:29:24.625943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:06.010 [2024-11-20 18:29:24.625948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:06.010 [2024-11-20 18:29:24.625953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:06.010 [2024-11-20 18:29:24.625958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:06.010 [2024-11-20 18:29:24.625962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:06.010 [2024-11-20 18:29:24.625967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:06.010 [2024-11-20 18:29:24.625972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:06.010 [2024-11-20 18:29:24.625977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:06.010 [2024-11-20 18:29:24.625983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:06.010 [2024-11-20 18:29:24.625987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:06.010 [2024-11-20 18:29:24.625992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:06.010 [2024-11-20 18:29:24.625997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:06.010 [2024-11-20 18:29:24.626001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:06.010 [2024-11-20 18:29:24.626006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:06.010 [2024-11-20 18:29:24.626011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:06.010 [2024-11-20 18:29:24.626016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:06.010 [2024-11-20 18:29:24.626021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:06.010 [2024-11-20 18:29:24.626026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:06.010 [2024-11-20 18:29:24.626031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.010 [2024-11-20 18:29:24.626036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:06.010 [2024-11-20 18:29:24.626040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:06.010 [2024-11-20 18:29:24.626045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.010 [2024-11-20 18:29:24.626050] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:06.010 [2024-11-20 18:29:24.626056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:06.010 [2024-11-20 18:29:24.626062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:06.010 [2024-11-20 18:29:24.626069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.010 [2024-11-20 18:29:24.626074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:06.010 [2024-11-20 18:29:24.626080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:06.010 [2024-11-20 18:29:24.626085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:06.010 
[2024-11-20 18:29:24.626090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:06.010 [2024-11-20 18:29:24.626113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:06.010 [2024-11-20 18:29:24.626118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:06.010 [2024-11-20 18:29:24.626125] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:06.010 [2024-11-20 18:29:24.626132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:06.010 [2024-11-20 18:29:24.626139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:06.010 [2024-11-20 18:29:24.626144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:06.010 [2024-11-20 18:29:24.626150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:06.010 [2024-11-20 18:29:24.626155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:06.010 [2024-11-20 18:29:24.626161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:06.010 [2024-11-20 18:29:24.626167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:06.010 [2024-11-20 18:29:24.626172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:06.010 [2024-11-20 18:29:24.626178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:06.010 [2024-11-20 18:29:24.626183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:06.010 [2024-11-20 18:29:24.626189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:06.010 [2024-11-20 18:29:24.626194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:06.010 [2024-11-20 18:29:24.626200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:06.010 [2024-11-20 18:29:24.626205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:06.010 [2024-11-20 18:29:24.626212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:06.010 [2024-11-20 18:29:24.626217] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:06.010 [2024-11-20 18:29:24.626223] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:06.010 [2024-11-20 18:29:24.626230] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:06.010 [2024-11-20 18:29:24.626236] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:06.010 [2024-11-20 18:29:24.626241] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:06.010 [2024-11-20 18:29:24.626247] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:06.010 [2024-11-20 18:29:24.626253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.010 [2024-11-20 18:29:24.626258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:06.010 [2024-11-20 18:29:24.626266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.479 ms 00:20:06.011 [2024-11-20 18:29:24.626271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.269 [2024-11-20 18:29:24.646944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.269 [2024-11-20 18:29:24.646971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:06.269 [2024-11-20 18:29:24.646979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.635 ms 00:20:06.269 [2024-11-20 18:29:24.646985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.269 [2024-11-20 18:29:24.647077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.269 [2024-11-20 18:29:24.647088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:06.269 [2024-11-20 18:29:24.647108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:20:06.269 [2024-11-20 18:29:24.647115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.269 [2024-11-20 18:29:24.690358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.269 [2024-11-20 18:29:24.690389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:06.269 [2024-11-20 18:29:24.690400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.227 ms 00:20:06.269 [2024-11-20 18:29:24.690408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.269 [2024-11-20 18:29:24.690468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.269 [2024-11-20 18:29:24.690477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:06.269 [2024-11-20 18:29:24.690485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:06.269 [2024-11-20 18:29:24.690490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.269 [2024-11-20 18:29:24.690762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.269 [2024-11-20 18:29:24.690774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:06.269 [2024-11-20 18:29:24.690782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:20:06.269 [2024-11-20 18:29:24.690787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.269 [2024-11-20 18:29:24.690898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.269 [2024-11-20 18:29:24.690906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:06.269 [2024-11-20 18:29:24.690913] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:20:06.269 [2024-11-20 18:29:24.690919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.269 [2024-11-20 18:29:24.701642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.269 [2024-11-20 18:29:24.701747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:06.269 [2024-11-20 18:29:24.701759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.708 ms 00:20:06.269 [2024-11-20 18:29:24.701765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.269 [2024-11-20 18:29:24.711544] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:06.269 [2024-11-20 18:29:24.711572] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:06.269 [2024-11-20 18:29:24.711581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.269 [2024-11-20 18:29:24.711587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:06.269 [2024-11-20 18:29:24.711594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.727 ms 00:20:06.269 [2024-11-20 18:29:24.711600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.269 [2024-11-20 18:29:24.729864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.269 [2024-11-20 18:29:24.729897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:06.269 [2024-11-20 18:29:24.729911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.219 ms 00:20:06.270 [2024-11-20 18:29:24.729917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.270 [2024-11-20 18:29:24.738665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.270 [2024-11-20 18:29:24.738689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:06.270 [2024-11-20 18:29:24.738697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.696 ms 00:20:06.270 [2024-11-20 18:29:24.738702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.270 [2024-11-20 18:29:24.747422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.270 [2024-11-20 18:29:24.747517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:06.270 [2024-11-20 18:29:24.747529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.680 ms 00:20:06.270 [2024-11-20 18:29:24.747534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.270 [2024-11-20 18:29:24.747985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.270 [2024-11-20 18:29:24.748002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:06.270 [2024-11-20 18:29:24.748009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.393 ms 00:20:06.270 [2024-11-20 18:29:24.748015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.270 [2024-11-20 18:29:24.791199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.270 [2024-11-20 18:29:24.791234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:06.270 [2024-11-20 18:29:24.791245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
43.167 ms 00:20:06.270 [2024-11-20 18:29:24.791251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.270 [2024-11-20 18:29:24.798937] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:06.270 [2024-11-20 18:29:24.810508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.270 [2024-11-20 18:29:24.810535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:06.270 [2024-11-20 18:29:24.810545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.192 ms 00:20:06.270 [2024-11-20 18:29:24.810552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.270 [2024-11-20 18:29:24.810628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.270 [2024-11-20 18:29:24.810638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:06.270 [2024-11-20 18:29:24.810645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:06.270 [2024-11-20 18:29:24.810651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.270 [2024-11-20 18:29:24.810686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.270 [2024-11-20 18:29:24.810692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:06.270 [2024-11-20 18:29:24.810698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:20:06.270 [2024-11-20 18:29:24.810704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.270 [2024-11-20 18:29:24.810725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.270 [2024-11-20 18:29:24.810732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:06.270 [2024-11-20 18:29:24.810740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:06.270 [2024-11-20 18:29:24.810746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.270 [2024-11-20 18:29:24.810768] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:06.270 [2024-11-20 18:29:24.810775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.270 [2024-11-20 18:29:24.810781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:06.270 [2024-11-20 18:29:24.810787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:06.270 [2024-11-20 18:29:24.810792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.270 [2024-11-20 18:29:24.828515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.270 [2024-11-20 18:29:24.828553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:06.270 [2024-11-20 18:29:24.828562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.708 ms 00:20:06.270 [2024-11-20 18:29:24.828568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.270 [2024-11-20 18:29:24.828637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.270 [2024-11-20 18:29:24.828645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:06.270 [2024-11-20 18:29:24.828651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:06.270 [2024-11-20 18:29:24.828657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.270 
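Each FTL management step above is reported by mngt/ftl_mngt.c:trace_step as a four-entry Action/name/duration/status record (source lines 427, 428, 430 and 431 in the log prefixes), and the whole startup sequence is summarized just below by finish_msg ("FTL startup", 224.656 ms). A minimal sketch for tallying where that startup time goes, assuming the console output has been saved one entry per line to build.log (a hypothetical name) and that only one FTL device, ftl0, appears in it:

#!/usr/bin/env python3
# Tally per-step FTL startup time from the trace_step records above.
# Assumes one log entry per line in build.log (hypothetical path);
# 428/430 are the trace_step source lines that print the step name
# and its duration, as seen in the log prefixes.
import re
from collections import defaultdict

NAME = re.compile(r"428:trace_step: .*\[FTL\]\[ftl0\] name: (.+)$")
DUR = re.compile(r"430:trace_step: .*\[FTL\]\[ftl0\] duration: ([\d.]+) ms")

totals = defaultdict(float)
pending = None
with open("build.log") as log:
    for line in log:
        if (m := NAME.search(line)):
            pending = m.group(1).strip()          # remember the step name
        elif (m := DUR.search(line)) and pending:
            totals[pending] += float(m.group(1))  # pair it with its duration
            pending = None

for name, ms in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{ms:10.3f} ms  {name}")

Run over this startup sequence, such a tally would attribute the largest shares of the 224.656 ms total reported below to "Initialize NV cache" (43.227 ms), "Restore P2L checkpoints" (43.167 ms) and "Initialize metadata" (20.635 ms), matching the per-step durations logged above.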
[2024-11-20 18:29:24.829782] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:06.270 [2024-11-20 18:29:24.832193] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 224.656 ms, result 0 00:20:06.270 [2024-11-20 18:29:24.832931] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:06.270 [2024-11-20 18:29:24.843927] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:07.288  [2024-11-20T18:29:26.851Z] Copying: 50/256 [MB] (50 MBps) [2024-11-20T18:29:28.235Z] Copying: 99/256 [MB] (48 MBps) [2024-11-20T18:29:29.183Z] Copying: 120/256 [MB] (21 MBps) [2024-11-20T18:29:30.126Z] Copying: 136/256 [MB] (16 MBps) [2024-11-20T18:29:31.070Z] Copying: 154/256 [MB] (17 MBps) [2024-11-20T18:29:32.012Z] Copying: 171/256 [MB] (17 MBps) [2024-11-20T18:29:32.952Z] Copying: 196/256 [MB] (25 MBps) [2024-11-20T18:29:33.896Z] Copying: 222/256 [MB] (25 MBps) [2024-11-20T18:29:34.467Z] Copying: 243/256 [MB] (20 MBps) [2024-11-20T18:29:34.467Z] Copying: 256/256 [MB] (average 26 MBps)[2024-11-20 18:29:34.430487] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:15.838 [2024-11-20 18:29:34.441008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.838 [2024-11-20 18:29:34.441062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:15.838 [2024-11-20 18:29:34.441077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:15.838 [2024-11-20 18:29:34.441086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.838 [2024-11-20 18:29:34.441141] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:15.838 [2024-11-20 18:29:34.444124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.838 [2024-11-20 18:29:34.444174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:15.838 [2024-11-20 18:29:34.444186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.967 ms 00:20:15.838 [2024-11-20 18:29:34.444194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.838 [2024-11-20 18:29:34.447248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.838 [2024-11-20 18:29:34.447295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:15.838 [2024-11-20 18:29:34.447306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.026 ms 00:20:15.838 [2024-11-20 18:29:34.447315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.838 [2024-11-20 18:29:34.456404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.838 [2024-11-20 18:29:34.456451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:15.838 [2024-11-20 18:29:34.456470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.070 ms 00:20:15.838 [2024-11-20 18:29:34.456479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.838 [2024-11-20 18:29:34.463709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.838 [2024-11-20 18:29:34.463746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:15.838 [2024-11-20 
18:29:34.463757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.184 ms 00:20:15.838 [2024-11-20 18:29:34.463765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.100 [2024-11-20 18:29:34.487519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.100 [2024-11-20 18:29:34.487550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:16.100 [2024-11-20 18:29:34.487560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.693 ms 00:20:16.100 [2024-11-20 18:29:34.487567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.100 [2024-11-20 18:29:34.501348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.100 [2024-11-20 18:29:34.501387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:16.100 [2024-11-20 18:29:34.501404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.746 ms 00:20:16.100 [2024-11-20 18:29:34.501415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.100 [2024-11-20 18:29:34.501545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.100 [2024-11-20 18:29:34.501555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:16.100 [2024-11-20 18:29:34.501563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:20:16.100 [2024-11-20 18:29:34.501570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.100 [2024-11-20 18:29:34.525047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.100 [2024-11-20 18:29:34.525078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:16.100 [2024-11-20 18:29:34.525089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.461 ms 00:20:16.100 [2024-11-20 18:29:34.525113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.100 [2024-11-20 18:29:34.548554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.100 [2024-11-20 18:29:34.548708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:16.100 [2024-11-20 18:29:34.548726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.404 ms 00:20:16.100 [2024-11-20 18:29:34.548733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.100 [2024-11-20 18:29:34.572084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.100 [2024-11-20 18:29:34.572129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:16.100 [2024-11-20 18:29:34.572139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.177 ms 00:20:16.100 [2024-11-20 18:29:34.572146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.100 [2024-11-20 18:29:34.595773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.100 [2024-11-20 18:29:34.595920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:16.100 [2024-11-20 18:29:34.595937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.562 ms 00:20:16.100 [2024-11-20 18:29:34.595944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.100 [2024-11-20 18:29:34.595978] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:16.100 [2024-11-20 18:29:34.595997] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596202] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 
[2024-11-20 18:29:34.596394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:16.100 [2024-11-20 18:29:34.596479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:16.101 [2024-11-20 18:29:34.596487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:16.101 [2024-11-20 18:29:34.596494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:16.101 [2024-11-20 18:29:34.596502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:16.101 [2024-11-20 18:29:34.596509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:16.101 [2024-11-20 18:29:34.596516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:16.101 [2024-11-20 18:29:34.596523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:16.101 [2024-11-20 18:29:34.596531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:16.101 [2024-11-20 18:29:34.596538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:16.101 [2024-11-20 18:29:34.596545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:16.101 [2024-11-20 18:29:34.596553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:16.101 [2024-11-20 18:29:34.596560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:16.101 [2024-11-20 18:29:34.596567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:16.101 [2024-11-20 18:29:34.596575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 
state: free 00:20:16.101 [2024-11-20 18:29:34.596582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:16.101 [2024-11-20 18:29:34.596589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:16.101 [2024-11-20 18:29:34.596596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:16.101 [2024-11-20 18:29:34.596604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:16.101 [2024-11-20 18:29:34.596611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:16.101 [2024-11-20 18:29:34.596619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:16.101 [2024-11-20 18:29:34.596626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:16.101 [2024-11-20 18:29:34.596633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:16.101 [2024-11-20 18:29:34.596640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:16.101 [2024-11-20 18:29:34.596647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:16.101 [2024-11-20 18:29:34.596655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:16.101 [2024-11-20 18:29:34.596663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:16.101 [2024-11-20 18:29:34.596670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:16.101 [2024-11-20 18:29:34.596678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:16.101 [2024-11-20 18:29:34.596685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:16.101 [2024-11-20 18:29:34.596710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:16.101 [2024-11-20 18:29:34.596717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:16.101 [2024-11-20 18:29:34.596724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:16.101 [2024-11-20 18:29:34.596733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:16.101 [2024-11-20 18:29:34.596740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:16.101 [2024-11-20 18:29:34.596748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:16.101 [2024-11-20 18:29:34.596763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:16.101 [2024-11-20 18:29:34.596771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:16.101 [2024-11-20 18:29:34.596778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:16.101 [2024-11-20 18:29:34.596786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 
0 / 261120 wr_cnt: 0 state: free 00:20:16.101 [2024-11-20 18:29:34.596802] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:16.101 [2024-11-20 18:29:34.596810] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cbb96cf1-d64b-463e-90b3-2025057ee758 00:20:16.101 [2024-11-20 18:29:34.596817] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:16.101 [2024-11-20 18:29:34.596825] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:16.101 [2024-11-20 18:29:34.596832] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:16.101 [2024-11-20 18:29:34.596840] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:16.101 [2024-11-20 18:29:34.596847] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:16.101 [2024-11-20 18:29:34.596855] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:16.101 [2024-11-20 18:29:34.596863] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:16.101 [2024-11-20 18:29:34.596869] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:16.101 [2024-11-20 18:29:34.596876] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:16.101 [2024-11-20 18:29:34.596883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.101 [2024-11-20 18:29:34.596890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:16.101 [2024-11-20 18:29:34.596902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.906 ms 00:20:16.101 [2024-11-20 18:29:34.596910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.101 [2024-11-20 18:29:34.610138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.101 [2024-11-20 18:29:34.610293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:16.101 [2024-11-20 18:29:34.610312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.197 ms 00:20:16.101 [2024-11-20 18:29:34.610320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.101 [2024-11-20 18:29:34.610708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.101 [2024-11-20 18:29:34.610732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:16.101 [2024-11-20 18:29:34.610742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 00:20:16.101 [2024-11-20 18:29:34.610749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.101 [2024-11-20 18:29:34.649170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.101 [2024-11-20 18:29:34.649336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:16.101 [2024-11-20 18:29:34.649355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.101 [2024-11-20 18:29:34.649363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.101 [2024-11-20 18:29:34.649472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.101 [2024-11-20 18:29:34.649485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:16.101 [2024-11-20 18:29:34.649494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.101 [2024-11-20 18:29:34.649501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:20:16.101 [2024-11-20 18:29:34.649554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.101 [2024-11-20 18:29:34.649564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:16.101 [2024-11-20 18:29:34.649573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.101 [2024-11-20 18:29:34.649580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.101 [2024-11-20 18:29:34.649598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.101 [2024-11-20 18:29:34.649606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:16.101 [2024-11-20 18:29:34.649617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.101 [2024-11-20 18:29:34.649624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.363 [2024-11-20 18:29:34.734887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.363 [2024-11-20 18:29:34.734945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:16.363 [2024-11-20 18:29:34.734959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.363 [2024-11-20 18:29:34.734968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.363 [2024-11-20 18:29:34.804531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.363 [2024-11-20 18:29:34.804761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:16.363 [2024-11-20 18:29:34.804787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.363 [2024-11-20 18:29:34.804796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.363 [2024-11-20 18:29:34.804875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.363 [2024-11-20 18:29:34.804886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:16.363 [2024-11-20 18:29:34.804895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.363 [2024-11-20 18:29:34.804904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.363 [2024-11-20 18:29:34.804937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.363 [2024-11-20 18:29:34.804946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:16.363 [2024-11-20 18:29:34.804955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.363 [2024-11-20 18:29:34.804967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.363 [2024-11-20 18:29:34.805072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.363 [2024-11-20 18:29:34.805083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:16.363 [2024-11-20 18:29:34.805091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.363 [2024-11-20 18:29:34.805130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.363 [2024-11-20 18:29:34.805167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.363 [2024-11-20 18:29:34.805178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:16.363 [2024-11-20 18:29:34.805187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.363 [2024-11-20 
18:29:34.805195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.363 [2024-11-20 18:29:34.805243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.363 [2024-11-20 18:29:34.805253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:16.363 [2024-11-20 18:29:34.805263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.363 [2024-11-20 18:29:34.805270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.363 [2024-11-20 18:29:34.805321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.363 [2024-11-20 18:29:34.805332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:16.363 [2024-11-20 18:29:34.805342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.363 [2024-11-20 18:29:34.805353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.363 [2024-11-20 18:29:34.805512] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 364.493 ms, result 0 00:20:17.305 00:20:17.305 00:20:17.305 18:29:35 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76540 00:20:17.305 18:29:35 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:17.305 18:29:35 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76540 00:20:17.305 18:29:35 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76540 ']' 00:20:17.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:17.305 18:29:35 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.305 18:29:35 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:17.305 18:29:35 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:17.305 18:29:35 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:17.305 18:29:35 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:17.566 [2024-11-20 18:29:35.963587] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
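At this point trim.sh launches a fresh spdk_tgt with -L ftl_init and waitforlisten blocks until the target's JSON-RPC server answers on /var/tmp/spdk.sock (the shell helper does this roughly by retrying scripts/rpc.py rpc_get_methods against the socket). A minimal stand-alone sketch of that readiness poll, using the socket path from the log and the standard rpc_get_methods RPC; the 100 s budget and 0.5 s retry interval are illustrative assumptions, not the helper's exact logic:

#!/usr/bin/env python3
# Readiness poll for spdk_tgt's JSON-RPC UNIX socket, in the spirit of
# waitforlisten above. Socket path taken from the log; timeouts assumed.
import json, socket, time

SOCK = "/var/tmp/spdk.sock"
REQ = json.dumps({"jsonrpc": "2.0", "method": "rpc_get_methods", "id": 1}).encode()

deadline = time.time() + 100
while time.time() < deadline:
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.settimeout(1.0)
    try:
        s.connect(SOCK)
        s.sendall(REQ)
        if s.recv(4096):              # any JSON-RPC reply means the target is up
            print(f"spdk_tgt is listening on {SOCK}")
            break
    except OSError:
        time.sleep(0.5)               # socket absent or not accepting yet; retry
    finally:
        s.close()
else:
    raise SystemExit(f"timed out waiting for {SOCK}")

Once the socket answers, the test proceeds (as below) to drive the target over the same socket with scripts/rpc.py load_config, which re-creates the bdevs and the ftl0 device from the saved JSON configuration.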
00:20:17.566 [2024-11-20 18:29:35.963914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76540 ] 00:20:17.566 [2024-11-20 18:29:36.117149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.827 [2024-11-20 18:29:36.243300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:18.399 18:29:36 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:18.399 18:29:36 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:18.399 18:29:36 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:18.659 [2024-11-20 18:29:37.148409] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:18.659 [2024-11-20 18:29:37.148485] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:18.924 [2024-11-20 18:29:37.327773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.924 [2024-11-20 18:29:37.327834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:18.924 [2024-11-20 18:29:37.327852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:18.924 [2024-11-20 18:29:37.327861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.924 [2024-11-20 18:29:37.330914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.924 [2024-11-20 18:29:37.330965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:18.924 [2024-11-20 18:29:37.330978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.030 ms 00:20:18.924 [2024-11-20 18:29:37.330987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.924 [2024-11-20 18:29:37.331128] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:18.924 [2024-11-20 18:29:37.331872] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:18.924 [2024-11-20 18:29:37.332039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.924 [2024-11-20 18:29:37.332053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:18.924 [2024-11-20 18:29:37.332064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.921 ms 00:20:18.924 [2024-11-20 18:29:37.332073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.924 [2024-11-20 18:29:37.334328] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:18.924 [2024-11-20 18:29:37.348706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.924 [2024-11-20 18:29:37.348764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:18.924 [2024-11-20 18:29:37.348780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.387 ms 00:20:18.924 [2024-11-20 18:29:37.348790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.924 [2024-11-20 18:29:37.348905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.924 [2024-11-20 18:29:37.348919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:18.924 [2024-11-20 18:29:37.348929] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:18.924 [2024-11-20 18:29:37.348939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.925 [2024-11-20 18:29:37.357085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.925 [2024-11-20 18:29:37.357150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:18.925 [2024-11-20 18:29:37.357161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.093 ms 00:20:18.925 [2024-11-20 18:29:37.357171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.925 [2024-11-20 18:29:37.357287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.925 [2024-11-20 18:29:37.357301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:18.925 [2024-11-20 18:29:37.357310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:20:18.925 [2024-11-20 18:29:37.357320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.925 [2024-11-20 18:29:37.357351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.925 [2024-11-20 18:29:37.357361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:18.925 [2024-11-20 18:29:37.357369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:18.925 [2024-11-20 18:29:37.357379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.925 [2024-11-20 18:29:37.357403] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:18.925 [2024-11-20 18:29:37.361593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.925 [2024-11-20 18:29:37.361629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:18.925 [2024-11-20 18:29:37.361644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.194 ms 00:20:18.925 [2024-11-20 18:29:37.361651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.925 [2024-11-20 18:29:37.361729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.925 [2024-11-20 18:29:37.361738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:18.925 [2024-11-20 18:29:37.361750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:18.925 [2024-11-20 18:29:37.361761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.925 [2024-11-20 18:29:37.361785] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:18.925 [2024-11-20 18:29:37.361807] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:18.925 [2024-11-20 18:29:37.361851] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:18.925 [2024-11-20 18:29:37.361868] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:18.925 [2024-11-20 18:29:37.361976] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:18.925 [2024-11-20 18:29:37.361988] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:18.925 [2024-11-20 18:29:37.362003] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:18.925 [2024-11-20 18:29:37.362016] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:18.925 [2024-11-20 18:29:37.362028] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:18.925 [2024-11-20 18:29:37.362036] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:18.925 [2024-11-20 18:29:37.362046] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:18.925 [2024-11-20 18:29:37.362054] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:18.925 [2024-11-20 18:29:37.362065] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:18.925 [2024-11-20 18:29:37.362074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.925 [2024-11-20 18:29:37.362084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:18.925 [2024-11-20 18:29:37.362113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:20:18.925 [2024-11-20 18:29:37.362124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.925 [2024-11-20 18:29:37.362214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.925 [2024-11-20 18:29:37.362244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:18.925 [2024-11-20 18:29:37.362254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:18.925 [2024-11-20 18:29:37.362285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.925 [2024-11-20 18:29:37.362387] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:18.925 [2024-11-20 18:29:37.362399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:18.925 [2024-11-20 18:29:37.362408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:18.925 [2024-11-20 18:29:37.362417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:18.925 [2024-11-20 18:29:37.362425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:18.925 [2024-11-20 18:29:37.362434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:18.925 [2024-11-20 18:29:37.362440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:18.925 [2024-11-20 18:29:37.362453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:18.925 [2024-11-20 18:29:37.362460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:18.925 [2024-11-20 18:29:37.362469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:18.925 [2024-11-20 18:29:37.362476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:18.925 [2024-11-20 18:29:37.362484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:18.925 [2024-11-20 18:29:37.362490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:18.925 [2024-11-20 18:29:37.362499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:18.925 [2024-11-20 18:29:37.362506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:18.925 [2024-11-20 18:29:37.362516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:18.925 
[2024-11-20 18:29:37.362523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:18.925 [2024-11-20 18:29:37.362533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:18.925 [2024-11-20 18:29:37.362540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:18.925 [2024-11-20 18:29:37.362549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:18.925 [2024-11-20 18:29:37.362562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:18.925 [2024-11-20 18:29:37.362571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:18.925 [2024-11-20 18:29:37.362578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:18.925 [2024-11-20 18:29:37.362588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:18.925 [2024-11-20 18:29:37.362594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:18.925 [2024-11-20 18:29:37.362603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:18.925 [2024-11-20 18:29:37.362610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:18.925 [2024-11-20 18:29:37.362619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:18.925 [2024-11-20 18:29:37.362626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:18.925 [2024-11-20 18:29:37.362634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:18.925 [2024-11-20 18:29:37.362640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:18.925 [2024-11-20 18:29:37.362649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:18.925 [2024-11-20 18:29:37.362655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:18.925 [2024-11-20 18:29:37.362665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:18.925 [2024-11-20 18:29:37.362672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:18.925 [2024-11-20 18:29:37.362681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:18.925 [2024-11-20 18:29:37.362688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:18.925 [2024-11-20 18:29:37.362696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:18.925 [2024-11-20 18:29:37.362703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:18.925 [2024-11-20 18:29:37.362712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:18.925 [2024-11-20 18:29:37.362719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:18.925 [2024-11-20 18:29:37.362728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:18.925 [2024-11-20 18:29:37.362734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:18.925 [2024-11-20 18:29:37.362743] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:18.925 [2024-11-20 18:29:37.362752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:18.925 [2024-11-20 18:29:37.362762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:18.925 [2024-11-20 18:29:37.362770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:18.925 [2024-11-20 18:29:37.362782] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:18.925 [2024-11-20 18:29:37.362791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:18.925 [2024-11-20 18:29:37.362800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:18.925 [2024-11-20 18:29:37.362807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:18.925 [2024-11-20 18:29:37.362818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:18.925 [2024-11-20 18:29:37.362824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:18.925 [2024-11-20 18:29:37.362835] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:18.925 [2024-11-20 18:29:37.362845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:18.925 [2024-11-20 18:29:37.362857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:18.925 [2024-11-20 18:29:37.362866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:18.925 [2024-11-20 18:29:37.362877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:18.925 [2024-11-20 18:29:37.362884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:18.925 [2024-11-20 18:29:37.362895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:18.926 [2024-11-20 18:29:37.362903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:18.926 [2024-11-20 18:29:37.362912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:18.926 [2024-11-20 18:29:37.362919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:18.926 [2024-11-20 18:29:37.362928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:18.926 [2024-11-20 18:29:37.362935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:18.926 [2024-11-20 18:29:37.362944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:18.926 [2024-11-20 18:29:37.362951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:18.926 [2024-11-20 18:29:37.362959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:18.926 [2024-11-20 18:29:37.362967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:18.926 [2024-11-20 18:29:37.362975] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:18.926 [2024-11-20 
18:29:37.362984] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:18.926 [2024-11-20 18:29:37.362996] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:18.926 [2024-11-20 18:29:37.363003] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:18.926 [2024-11-20 18:29:37.363012] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:18.926 [2024-11-20 18:29:37.363020] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:18.926 [2024-11-20 18:29:37.363029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.926 [2024-11-20 18:29:37.363036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:18.926 [2024-11-20 18:29:37.363045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.708 ms 00:20:18.926 [2024-11-20 18:29:37.363053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.926 [2024-11-20 18:29:37.395058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.926 [2024-11-20 18:29:37.395126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:18.926 [2024-11-20 18:29:37.395140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.926 ms 00:20:18.926 [2024-11-20 18:29:37.395149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.926 [2024-11-20 18:29:37.395288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.926 [2024-11-20 18:29:37.395299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:18.926 [2024-11-20 18:29:37.395310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:18.926 [2024-11-20 18:29:37.395318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.926 [2024-11-20 18:29:37.430289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.926 [2024-11-20 18:29:37.430330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:18.926 [2024-11-20 18:29:37.430348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.945 ms 00:20:18.926 [2024-11-20 18:29:37.430356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.926 [2024-11-20 18:29:37.430451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.926 [2024-11-20 18:29:37.430461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:18.926 [2024-11-20 18:29:37.430472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:18.926 [2024-11-20 18:29:37.430480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.926 [2024-11-20 18:29:37.431030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.926 [2024-11-20 18:29:37.431072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:18.926 [2024-11-20 18:29:37.431087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms 00:20:18.926 [2024-11-20 18:29:37.431118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:18.926 [2024-11-20 18:29:37.431276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.926 [2024-11-20 18:29:37.431285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:18.926 [2024-11-20 18:29:37.431297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:20:18.926 [2024-11-20 18:29:37.431305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.926 [2024-11-20 18:29:37.449218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.926 [2024-11-20 18:29:37.449405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:18.926 [2024-11-20 18:29:37.449429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.885 ms 00:20:18.926 [2024-11-20 18:29:37.449437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.926 [2024-11-20 18:29:37.463989] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:18.926 [2024-11-20 18:29:37.464036] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:18.926 [2024-11-20 18:29:37.464052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.926 [2024-11-20 18:29:37.464061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:18.926 [2024-11-20 18:29:37.464073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.499 ms 00:20:18.926 [2024-11-20 18:29:37.464080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.926 [2024-11-20 18:29:37.490357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.926 [2024-11-20 18:29:37.490408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:18.926 [2024-11-20 18:29:37.490424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.162 ms 00:20:18.926 [2024-11-20 18:29:37.490432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.926 [2024-11-20 18:29:37.503220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.926 [2024-11-20 18:29:37.503411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:18.926 [2024-11-20 18:29:37.503441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.687 ms 00:20:18.926 [2024-11-20 18:29:37.503449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.926 [2024-11-20 18:29:37.516020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.926 [2024-11-20 18:29:37.516062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:18.926 [2024-11-20 18:29:37.516077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.487 ms 00:20:18.926 [2024-11-20 18:29:37.516085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.926 [2024-11-20 18:29:37.516794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.926 [2024-11-20 18:29:37.516831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:18.926 [2024-11-20 18:29:37.516844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.573 ms 00:20:18.926 [2024-11-20 18:29:37.516852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.188 [2024-11-20 
18:29:37.592714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.188 [2024-11-20 18:29:37.592788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:19.188 [2024-11-20 18:29:37.592810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.828 ms 00:20:19.188 [2024-11-20 18:29:37.592820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.188 [2024-11-20 18:29:37.604317] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:19.188 [2024-11-20 18:29:37.623862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.188 [2024-11-20 18:29:37.623924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:19.188 [2024-11-20 18:29:37.623942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.927 ms 00:20:19.188 [2024-11-20 18:29:37.623952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.188 [2024-11-20 18:29:37.624053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.188 [2024-11-20 18:29:37.624066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:19.188 [2024-11-20 18:29:37.624076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:19.188 [2024-11-20 18:29:37.624086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.188 [2024-11-20 18:29:37.624177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.188 [2024-11-20 18:29:37.624190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:19.188 [2024-11-20 18:29:37.624198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:19.188 [2024-11-20 18:29:37.624209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.188 [2024-11-20 18:29:37.624237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.188 [2024-11-20 18:29:37.624248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:19.188 [2024-11-20 18:29:37.624257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:19.188 [2024-11-20 18:29:37.624271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.188 [2024-11-20 18:29:37.624307] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:19.188 [2024-11-20 18:29:37.624323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.188 [2024-11-20 18:29:37.624330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:19.188 [2024-11-20 18:29:37.624345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:19.188 [2024-11-20 18:29:37.624352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.188 [2024-11-20 18:29:37.650570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.188 [2024-11-20 18:29:37.650743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:19.188 [2024-11-20 18:29:37.650771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.185 ms 00:20:19.188 [2024-11-20 18:29:37.650780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.188 [2024-11-20 18:29:37.650894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.188 [2024-11-20 18:29:37.650905] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:19.188 [2024-11-20 18:29:37.650918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:20:19.188 [2024-11-20 18:29:37.650929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.188 [2024-11-20 18:29:37.652196] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:19.188 [2024-11-20 18:29:37.655677] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 324.060 ms, result 0 00:20:19.188 [2024-11-20 18:29:37.657655] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:19.188 Some configs were skipped because the RPC state that can call them passed over. 00:20:19.188 18:29:37 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:19.448 [2024-11-20 18:29:37.897954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.448 [2024-11-20 18:29:37.898165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:19.448 [2024-11-20 18:29:37.898238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.456 ms 00:20:19.448 [2024-11-20 18:29:37.898266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.448 [2024-11-20 18:29:37.898323] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.825 ms, result 0 00:20:19.448 true 00:20:19.448 18:29:37 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:19.708 [2024-11-20 18:29:38.113951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.708 [2024-11-20 18:29:38.114123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:19.708 [2024-11-20 18:29:38.114192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.174 ms 00:20:19.708 [2024-11-20 18:29:38.114217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.708 [2024-11-20 18:29:38.114275] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.500 ms, result 0 00:20:19.708 true 00:20:19.708 18:29:38 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 76540 00:20:19.708 18:29:38 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76540 ']' 00:20:19.708 18:29:38 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76540 00:20:19.708 18:29:38 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:19.708 18:29:38 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:19.708 18:29:38 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76540 00:20:19.708 killing process with pid 76540 00:20:19.708 18:29:38 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:19.708 18:29:38 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:19.708 18:29:38 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76540' 00:20:19.708 18:29:38 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76540 00:20:19.708 18:29:38 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76540 00:20:20.278 [2024-11-20 18:29:38.891544] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.278 [2024-11-20 18:29:38.891625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:20.278 [2024-11-20 18:29:38.891640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:20.278 [2024-11-20 18:29:38.891651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.278 [2024-11-20 18:29:38.891677] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:20.278 [2024-11-20 18:29:38.894892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.278 [2024-11-20 18:29:38.895090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:20.278 [2024-11-20 18:29:38.895134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.193 ms 00:20:20.278 [2024-11-20 18:29:38.895144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.278 [2024-11-20 18:29:38.895492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.278 [2024-11-20 18:29:38.895503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:20.278 [2024-11-20 18:29:38.895515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:20:20.278 [2024-11-20 18:29:38.895523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.278 [2024-11-20 18:29:38.900225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.278 [2024-11-20 18:29:38.900265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:20.278 [2024-11-20 18:29:38.900281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.677 ms 00:20:20.278 [2024-11-20 18:29:38.900290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.538 [2024-11-20 18:29:38.907274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.538 [2024-11-20 18:29:38.907317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:20.538 [2024-11-20 18:29:38.907331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.931 ms 00:20:20.538 [2024-11-20 18:29:38.907339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.538 [2024-11-20 18:29:38.919262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.538 [2024-11-20 18:29:38.919311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:20.538 [2024-11-20 18:29:38.919328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.611 ms 00:20:20.538 [2024-11-20 18:29:38.919343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.538 [2024-11-20 18:29:38.928329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.538 [2024-11-20 18:29:38.928377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:20.538 [2024-11-20 18:29:38.928393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.926 ms 00:20:20.538 [2024-11-20 18:29:38.928401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.538 [2024-11-20 18:29:38.928556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.538 [2024-11-20 18:29:38.928567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:20.538 [2024-11-20 18:29:38.928579] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:20:20.538 [2024-11-20 18:29:38.928587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.538 [2024-11-20 18:29:38.939778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.538 [2024-11-20 18:29:38.939822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:20.538 [2024-11-20 18:29:38.939835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.166 ms 00:20:20.538 [2024-11-20 18:29:38.939842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.538 [2024-11-20 18:29:38.950607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.538 [2024-11-20 18:29:38.950649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:20.538 [2024-11-20 18:29:38.950665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.709 ms 00:20:20.538 [2024-11-20 18:29:38.950672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.538 [2024-11-20 18:29:38.960430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.538 [2024-11-20 18:29:38.960470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:20.538 [2024-11-20 18:29:38.960484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.704 ms 00:20:20.538 [2024-11-20 18:29:38.960489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.538 [2024-11-20 18:29:38.967933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.538 [2024-11-20 18:29:38.967976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:20.538 [2024-11-20 18:29:38.967987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.371 ms 00:20:20.538 [2024-11-20 18:29:38.967993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.538 [2024-11-20 18:29:38.968059] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:20.538 [2024-11-20 18:29:38.968074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:20.538 [2024-11-20 18:29:38.968084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:20.538 [2024-11-20 18:29:38.968090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:20.538 [2024-11-20 18:29:38.968119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:20.538 [2024-11-20 18:29:38.968125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:20.538 [2024-11-20 18:29:38.968136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:20.538 [2024-11-20 18:29:38.968143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:20.538 [2024-11-20 18:29:38.968152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:20.538 [2024-11-20 18:29:38.968157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:20.538 [2024-11-20 18:29:38.968166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:20.538 [2024-11-20 18:29:38.968172] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:20.538 [2024-11-20 18:29:38.968180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:20.538 [2024-11-20 18:29:38.968186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:20.538 [2024-11-20 18:29:38.968194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 
[2024-11-20 18:29:38.968348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:20:20.539 [2024-11-20 18:29:38.968524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:20.539 [2024-11-20 18:29:38.968820] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:20.539 [2024-11-20 18:29:38.968834] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cbb96cf1-d64b-463e-90b3-2025057ee758 00:20:20.539 [2024-11-20 18:29:38.968847] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:20.539 [2024-11-20 18:29:38.968858] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:20.540 [2024-11-20 18:29:38.968863] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:20.540 [2024-11-20 18:29:38.968872] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:20.540 [2024-11-20 18:29:38.968878] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:20.540 [2024-11-20 18:29:38.968887] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:20.540 [2024-11-20 18:29:38.968893] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:20.540 [2024-11-20 18:29:38.968900] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:20.540 [2024-11-20 18:29:38.968905] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:20.540 [2024-11-20 18:29:38.968913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:20.540 [2024-11-20 18:29:38.968919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:20.540 [2024-11-20 18:29:38.968929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.856 ms 00:20:20.540 [2024-11-20 18:29:38.968935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.540 [2024-11-20 18:29:38.979529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.540 [2024-11-20 18:29:38.979565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:20.540 [2024-11-20 18:29:38.979579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.568 ms 00:20:20.540 [2024-11-20 18:29:38.979585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.540 [2024-11-20 18:29:38.979918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.540 [2024-11-20 18:29:38.979928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:20.540 [2024-11-20 18:29:38.979937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:20:20.540 [2024-11-20 18:29:38.979945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.540 [2024-11-20 18:29:39.017240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.540 [2024-11-20 18:29:39.017374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:20.540 [2024-11-20 18:29:39.017391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.540 [2024-11-20 18:29:39.017398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.540 [2024-11-20 18:29:39.017487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.540 [2024-11-20 18:29:39.017495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:20.540 [2024-11-20 18:29:39.017503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.540 [2024-11-20 18:29:39.017511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.540 [2024-11-20 18:29:39.017550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.540 [2024-11-20 18:29:39.017557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:20.540 [2024-11-20 18:29:39.017567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.540 [2024-11-20 18:29:39.017573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.540 [2024-11-20 18:29:39.017587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.540 [2024-11-20 18:29:39.017593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:20.540 [2024-11-20 18:29:39.017601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.540 [2024-11-20 18:29:39.017606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.540 [2024-11-20 18:29:39.077822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.540 [2024-11-20 18:29:39.077853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:20.540 [2024-11-20 18:29:39.077864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.540 [2024-11-20 18:29:39.077869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.540 [2024-11-20 
18:29:39.126629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.540 [2024-11-20 18:29:39.126660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:20.540 [2024-11-20 18:29:39.126669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.540 [2024-11-20 18:29:39.126678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.540 [2024-11-20 18:29:39.126734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.540 [2024-11-20 18:29:39.126741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:20.540 [2024-11-20 18:29:39.126751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.540 [2024-11-20 18:29:39.126756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.540 [2024-11-20 18:29:39.126780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.540 [2024-11-20 18:29:39.126786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:20.540 [2024-11-20 18:29:39.126793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.540 [2024-11-20 18:29:39.126798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.540 [2024-11-20 18:29:39.126868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.540 [2024-11-20 18:29:39.126875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:20.540 [2024-11-20 18:29:39.126883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.540 [2024-11-20 18:29:39.126889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.540 [2024-11-20 18:29:39.126914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.540 [2024-11-20 18:29:39.126921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:20.540 [2024-11-20 18:29:39.126928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.540 [2024-11-20 18:29:39.126934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.540 [2024-11-20 18:29:39.126963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.540 [2024-11-20 18:29:39.126971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:20.540 [2024-11-20 18:29:39.126980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.540 [2024-11-20 18:29:39.126986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.540 [2024-11-20 18:29:39.127019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.540 [2024-11-20 18:29:39.127026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:20.540 [2024-11-20 18:29:39.127034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.540 [2024-11-20 18:29:39.127040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.540 [2024-11-20 18:29:39.127158] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 235.606 ms, result 0 00:20:21.109 18:29:39 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:21.109 18:29:39 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:21.109 [2024-11-20 18:29:39.702768] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:20:21.109 [2024-11-20 18:29:39.702899] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76593 ] 00:20:21.370 [2024-11-20 18:29:39.864238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.370 [2024-11-20 18:29:39.983854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.943 [2024-11-20 18:29:40.275865] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:21.943 [2024-11-20 18:29:40.275944] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:21.943 [2024-11-20 18:29:40.438530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.943 [2024-11-20 18:29:40.438761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:21.943 [2024-11-20 18:29:40.438787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:21.943 [2024-11-20 18:29:40.438796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.943 [2024-11-20 18:29:40.441880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.943 [2024-11-20 18:29:40.442058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:21.943 [2024-11-20 18:29:40.442077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.055 ms 00:20:21.943 [2024-11-20 18:29:40.442086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.943 [2024-11-20 18:29:40.442222] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:21.943 [2024-11-20 18:29:40.442978] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:21.943 [2024-11-20 18:29:40.443017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.943 [2024-11-20 18:29:40.443026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:21.943 [2024-11-20 18:29:40.443036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.805 ms 00:20:21.943 [2024-11-20 18:29:40.443044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.943 [2024-11-20 18:29:40.444769] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:21.943 [2024-11-20 18:29:40.459266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.943 [2024-11-20 18:29:40.459443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:21.943 [2024-11-20 18:29:40.459511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.498 ms 00:20:21.943 [2024-11-20 18:29:40.459535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.943 [2024-11-20 18:29:40.459655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.943 [2024-11-20 18:29:40.459673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:21.943 [2024-11-20 18:29:40.459683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.029 ms 00:20:21.943 [2024-11-20 18:29:40.459691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.943 [2024-11-20 18:29:40.468055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.943 [2024-11-20 18:29:40.468116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:21.943 [2024-11-20 18:29:40.468127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.316 ms 00:20:21.943 [2024-11-20 18:29:40.468136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.943 [2024-11-20 18:29:40.468244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.943 [2024-11-20 18:29:40.468255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:21.943 [2024-11-20 18:29:40.468265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:20:21.943 [2024-11-20 18:29:40.468274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.943 [2024-11-20 18:29:40.468303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.943 [2024-11-20 18:29:40.468315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:21.943 [2024-11-20 18:29:40.468325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:21.943 [2024-11-20 18:29:40.468333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.943 [2024-11-20 18:29:40.468355] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:21.943 [2024-11-20 18:29:40.472520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.943 [2024-11-20 18:29:40.472680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:21.943 [2024-11-20 18:29:40.472699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.170 ms 00:20:21.943 [2024-11-20 18:29:40.472726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.944 [2024-11-20 18:29:40.472806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.944 [2024-11-20 18:29:40.472817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:21.944 [2024-11-20 18:29:40.472827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:21.944 [2024-11-20 18:29:40.472835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.944 [2024-11-20 18:29:40.472858] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:21.944 [2024-11-20 18:29:40.472885] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:21.944 [2024-11-20 18:29:40.472922] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:21.944 [2024-11-20 18:29:40.472940] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:21.944 [2024-11-20 18:29:40.473047] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:21.944 [2024-11-20 18:29:40.473058] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:21.944 [2024-11-20 18:29:40.473069] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:21.944 [2024-11-20 18:29:40.473080] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:21.944 [2024-11-20 18:29:40.473115] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:21.944 [2024-11-20 18:29:40.473125] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:21.944 [2024-11-20 18:29:40.473133] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:21.944 [2024-11-20 18:29:40.473142] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:21.944 [2024-11-20 18:29:40.473150] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:21.944 [2024-11-20 18:29:40.473159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.944 [2024-11-20 18:29:40.473167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:21.944 [2024-11-20 18:29:40.473175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:20:21.944 [2024-11-20 18:29:40.473182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.944 [2024-11-20 18:29:40.473275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.944 [2024-11-20 18:29:40.473285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:21.944 [2024-11-20 18:29:40.473297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:20:21.944 [2024-11-20 18:29:40.473305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.944 [2024-11-20 18:29:40.473409] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:21.944 [2024-11-20 18:29:40.473419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:21.944 [2024-11-20 18:29:40.473429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:21.944 [2024-11-20 18:29:40.473438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:21.944 [2024-11-20 18:29:40.473447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:21.944 [2024-11-20 18:29:40.473454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:21.944 [2024-11-20 18:29:40.473460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:21.944 [2024-11-20 18:29:40.473467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:21.944 [2024-11-20 18:29:40.473474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:21.944 [2024-11-20 18:29:40.473480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:21.944 [2024-11-20 18:29:40.473488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:21.944 [2024-11-20 18:29:40.473496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:21.944 [2024-11-20 18:29:40.473503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:21.944 [2024-11-20 18:29:40.473517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:21.944 [2024-11-20 18:29:40.473523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:21.944 [2024-11-20 18:29:40.473530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:21.944 [2024-11-20 18:29:40.473537] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:21.944 [2024-11-20 18:29:40.473545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:21.944 [2024-11-20 18:29:40.473551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:21.944 [2024-11-20 18:29:40.473561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:21.944 [2024-11-20 18:29:40.473570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:21.944 [2024-11-20 18:29:40.473578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:21.944 [2024-11-20 18:29:40.473585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:21.944 [2024-11-20 18:29:40.473592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:21.944 [2024-11-20 18:29:40.473599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:21.944 [2024-11-20 18:29:40.473606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:21.944 [2024-11-20 18:29:40.473614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:21.944 [2024-11-20 18:29:40.473620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:21.944 [2024-11-20 18:29:40.473627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:21.944 [2024-11-20 18:29:40.473634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:21.944 [2024-11-20 18:29:40.473640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:21.944 [2024-11-20 18:29:40.473647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:21.944 [2024-11-20 18:29:40.473654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:21.944 [2024-11-20 18:29:40.473661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:21.944 [2024-11-20 18:29:40.473668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:21.944 [2024-11-20 18:29:40.473675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:21.944 [2024-11-20 18:29:40.473681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:21.944 [2024-11-20 18:29:40.473688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:21.944 [2024-11-20 18:29:40.473696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:21.944 [2024-11-20 18:29:40.473703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:21.944 [2024-11-20 18:29:40.473711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:21.944 [2024-11-20 18:29:40.473717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:21.944 [2024-11-20 18:29:40.473724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:21.944 [2024-11-20 18:29:40.473732] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:21.944 [2024-11-20 18:29:40.473740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:21.944 [2024-11-20 18:29:40.473748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:21.944 [2024-11-20 18:29:40.473757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:21.944 [2024-11-20 18:29:40.473766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:21.944 
[2024-11-20 18:29:40.473772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:21.944 [2024-11-20 18:29:40.473780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:21.944 [2024-11-20 18:29:40.473787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:21.944 [2024-11-20 18:29:40.473795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:21.944 [2024-11-20 18:29:40.473802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:21.944 [2024-11-20 18:29:40.473811] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:21.944 [2024-11-20 18:29:40.473821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:21.944 [2024-11-20 18:29:40.473830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:21.944 [2024-11-20 18:29:40.473840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:21.944 [2024-11-20 18:29:40.473846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:21.944 [2024-11-20 18:29:40.473855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:21.944 [2024-11-20 18:29:40.473862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:21.944 [2024-11-20 18:29:40.473870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:21.944 [2024-11-20 18:29:40.473877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:21.944 [2024-11-20 18:29:40.473885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:21.944 [2024-11-20 18:29:40.473892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:21.944 [2024-11-20 18:29:40.473899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:21.945 [2024-11-20 18:29:40.473906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:21.945 [2024-11-20 18:29:40.473913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:21.945 [2024-11-20 18:29:40.473921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:21.945 [2024-11-20 18:29:40.473928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:21.945 [2024-11-20 18:29:40.473936] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:21.945 [2024-11-20 18:29:40.473944] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:21.945 [2024-11-20 18:29:40.473952] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:21.945 [2024-11-20 18:29:40.473960] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:21.945 [2024-11-20 18:29:40.473967] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:21.945 [2024-11-20 18:29:40.473975] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:21.945 [2024-11-20 18:29:40.473982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.945 [2024-11-20 18:29:40.473989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:21.945 [2024-11-20 18:29:40.474000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.642 ms 00:20:21.945 [2024-11-20 18:29:40.474008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.945 [2024-11-20 18:29:40.505917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.945 [2024-11-20 18:29:40.505966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:21.945 [2024-11-20 18:29:40.505978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.857 ms 00:20:21.945 [2024-11-20 18:29:40.505987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.945 [2024-11-20 18:29:40.506141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.945 [2024-11-20 18:29:40.506158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:21.945 [2024-11-20 18:29:40.506168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:20:21.945 [2024-11-20 18:29:40.506176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.945 [2024-11-20 18:29:40.548743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.945 [2024-11-20 18:29:40.548795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:21.945 [2024-11-20 18:29:40.548809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.542 ms 00:20:21.945 [2024-11-20 18:29:40.548822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.945 [2024-11-20 18:29:40.548935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.945 [2024-11-20 18:29:40.548948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:21.945 [2024-11-20 18:29:40.548957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:21.945 [2024-11-20 18:29:40.548965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.945 [2024-11-20 18:29:40.549513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.945 [2024-11-20 18:29:40.549548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:21.945 [2024-11-20 18:29:40.549559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms 00:20:21.945 [2024-11-20 18:29:40.549577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.945 [2024-11-20 
18:29:40.549736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.945 [2024-11-20 18:29:40.549747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:21.945 [2024-11-20 18:29:40.549756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:20:21.945 [2024-11-20 18:29:40.549764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.945 [2024-11-20 18:29:40.566012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.945 [2024-11-20 18:29:40.566060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:21.945 [2024-11-20 18:29:40.566072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.224 ms 00:20:21.945 [2024-11-20 18:29:40.566080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.206 [2024-11-20 18:29:40.580654] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:22.206 [2024-11-20 18:29:40.580712] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:22.206 [2024-11-20 18:29:40.580727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.206 [2024-11-20 18:29:40.580736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:22.206 [2024-11-20 18:29:40.580746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.514 ms 00:20:22.206 [2024-11-20 18:29:40.580753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.206 [2024-11-20 18:29:40.606421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.206 [2024-11-20 18:29:40.606479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:22.206 [2024-11-20 18:29:40.606492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.573 ms 00:20:22.206 [2024-11-20 18:29:40.606500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.207 [2024-11-20 18:29:40.619332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.207 [2024-11-20 18:29:40.619376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:22.207 [2024-11-20 18:29:40.619388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.741 ms 00:20:22.207 [2024-11-20 18:29:40.619396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.207 [2024-11-20 18:29:40.631957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.207 [2024-11-20 18:29:40.632002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:22.207 [2024-11-20 18:29:40.632015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.475 ms 00:20:22.207 [2024-11-20 18:29:40.632023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.207 [2024-11-20 18:29:40.632714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.207 [2024-11-20 18:29:40.632741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:22.207 [2024-11-20 18:29:40.632752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:20:22.207 [2024-11-20 18:29:40.632760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.207 [2024-11-20 18:29:40.696831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:22.207 [2024-11-20 18:29:40.696897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:22.207 [2024-11-20 18:29:40.696912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.042 ms 00:20:22.207 [2024-11-20 18:29:40.696922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.207 [2024-11-20 18:29:40.708074] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:22.207 [2024-11-20 18:29:40.726715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.207 [2024-11-20 18:29:40.726769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:22.207 [2024-11-20 18:29:40.726783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.696 ms 00:20:22.207 [2024-11-20 18:29:40.726791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.207 [2024-11-20 18:29:40.726887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.207 [2024-11-20 18:29:40.726898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:22.207 [2024-11-20 18:29:40.726909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:22.207 [2024-11-20 18:29:40.726917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.207 [2024-11-20 18:29:40.726976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.207 [2024-11-20 18:29:40.726986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:22.207 [2024-11-20 18:29:40.726995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:22.207 [2024-11-20 18:29:40.727004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.207 [2024-11-20 18:29:40.727031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.207 [2024-11-20 18:29:40.727043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:22.207 [2024-11-20 18:29:40.727051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:22.207 [2024-11-20 18:29:40.727059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.207 [2024-11-20 18:29:40.727135] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:22.207 [2024-11-20 18:29:40.727147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.207 [2024-11-20 18:29:40.727156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:22.207 [2024-11-20 18:29:40.727165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:22.207 [2024-11-20 18:29:40.727173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.207 [2024-11-20 18:29:40.752890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.207 [2024-11-20 18:29:40.752939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:22.207 [2024-11-20 18:29:40.752953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.691 ms 00:20:22.207 [2024-11-20 18:29:40.752962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.207 [2024-11-20 18:29:40.753127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.207 [2024-11-20 18:29:40.753141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:20:22.207 [2024-11-20 18:29:40.753152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:20:22.207 [2024-11-20 18:29:40.753160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.207 [2024-11-20 18:29:40.754251] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:22.207 [2024-11-20 18:29:40.757736] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 315.355 ms, result 0 00:20:22.207 [2024-11-20 18:29:40.759191] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:22.207 [2024-11-20 18:29:40.772670] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:23.154  [2024-11-20T18:29:43.165Z] Copying: 24/256 [MB] (24 MBps) [2024-11-20T18:29:44.109Z] Copying: 46/256 [MB] (22 MBps) [2024-11-20T18:29:45.053Z] Copying: 69/256 [MB] (22 MBps) [2024-11-20T18:29:45.994Z] Copying: 88/256 [MB] (19 MBps) [2024-11-20T18:29:46.935Z] Copying: 108/256 [MB] (19 MBps) [2024-11-20T18:29:47.874Z] Copying: 128/256 [MB] (20 MBps) [2024-11-20T18:29:48.814Z] Copying: 150/256 [MB] (21 MBps) [2024-11-20T18:29:50.208Z] Copying: 170/256 [MB] (20 MBps) [2024-11-20T18:29:51.147Z] Copying: 188/256 [MB] (17 MBps) [2024-11-20T18:29:52.084Z] Copying: 211/256 [MB] (22 MBps) [2024-11-20T18:29:53.026Z] Copying: 236/256 [MB] (25 MBps) [2024-11-20T18:29:53.026Z] Copying: 256/256 [MB] (average 21 MBps)[2024-11-20 18:29:52.663773] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:34.397 [2024-11-20 18:29:52.672758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.397 [2024-11-20 18:29:52.672801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:34.397 [2024-11-20 18:29:52.672818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:34.397 [2024-11-20 18:29:52.672833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.397 [2024-11-20 18:29:52.672855] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:34.397 [2024-11-20 18:29:52.675628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.397 [2024-11-20 18:29:52.675663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:34.397 [2024-11-20 18:29:52.675673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.760 ms 00:20:34.397 [2024-11-20 18:29:52.675681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.397 [2024-11-20 18:29:52.675915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.397 [2024-11-20 18:29:52.675926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:34.397 [2024-11-20 18:29:52.675934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:20:34.397 [2024-11-20 18:29:52.675941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.397 [2024-11-20 18:29:52.678766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.397 [2024-11-20 18:29:52.678979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:34.398 [2024-11-20 18:29:52.678996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 2.813 ms 00:20:34.398 [2024-11-20 18:29:52.679005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.398 [2024-11-20 18:29:52.684467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.398 [2024-11-20 18:29:52.684582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:34.398 [2024-11-20 18:29:52.684639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.438 ms 00:20:34.398 [2024-11-20 18:29:52.684658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.398 [2024-11-20 18:29:52.703915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.398 [2024-11-20 18:29:52.704047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:34.398 [2024-11-20 18:29:52.704118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.200 ms 00:20:34.398 [2024-11-20 18:29:52.704138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.398 [2024-11-20 18:29:52.726764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.398 [2024-11-20 18:29:52.726922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:34.398 [2024-11-20 18:29:52.726997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.575 ms 00:20:34.398 [2024-11-20 18:29:52.727029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.398 [2024-11-20 18:29:52.727211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.398 [2024-11-20 18:29:52.727265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:34.398 [2024-11-20 18:29:52.727300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:20:34.398 [2024-11-20 18:29:52.727320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.398 [2024-11-20 18:29:52.751116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.398 [2024-11-20 18:29:52.751236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:34.398 [2024-11-20 18:29:52.751289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.760 ms 00:20:34.398 [2024-11-20 18:29:52.751310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.398 [2024-11-20 18:29:52.775248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.398 [2024-11-20 18:29:52.775355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:34.398 [2024-11-20 18:29:52.775406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.895 ms 00:20:34.398 [2024-11-20 18:29:52.775427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.398 [2024-11-20 18:29:52.798181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.398 [2024-11-20 18:29:52.798287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:34.398 [2024-11-20 18:29:52.798335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.713 ms 00:20:34.398 [2024-11-20 18:29:52.798357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.398 [2024-11-20 18:29:52.821460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.398 [2024-11-20 18:29:52.821590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:34.398 [2024-11-20 
18:29:52.821648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.811 ms 00:20:34.398 [2024-11-20 18:29:52.821671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.398 [2024-11-20 18:29:52.821754] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:34.398 [2024-11-20 18:29:52.821803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.821886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.821919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.821948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.822258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.822321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.822351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.822442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.822476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.822505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.822534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.822596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.822626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.822655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.822684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.822713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.822742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.822771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.822800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.822886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.822916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.822944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.822973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.823002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.823031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.823059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.823087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.823157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.823189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.823218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.823247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.823275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.823304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.823358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.823390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.823419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.823448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.823477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.823506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.823534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.823594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.823624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.823652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.823680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.823710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.823738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.823877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.823912] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.823941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.824047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.824076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.824145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.824175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.824422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.824442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.824453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.824461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.824469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.824477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:34.398 [2024-11-20 18:29:52.824484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:34.399 [2024-11-20 18:29:52.824492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:34.399 [2024-11-20 18:29:52.824500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:34.399 [2024-11-20 18:29:52.824508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:34.399 [2024-11-20 18:29:52.824517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:34.399 [2024-11-20 18:29:52.824524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:34.399 [2024-11-20 18:29:52.824531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:34.399 [2024-11-20 18:29:52.824539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:34.399 [2024-11-20 18:29:52.824546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:34.399 [2024-11-20 18:29:52.824554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:34.399 [2024-11-20 18:29:52.824562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:34.399 [2024-11-20 18:29:52.824570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:34.399 [2024-11-20 18:29:52.824578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:34.399 [2024-11-20 18:29:52.824586] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:34.399 [2024-11-20 18:29:52.824593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:34.399 [2024-11-20 18:29:52.824600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:34.399 [2024-11-20 18:29:52.824608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:34.399 [2024-11-20 18:29:52.824615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:34.399 [2024-11-20 18:29:52.824622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:34.399 [2024-11-20 18:29:52.824630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:34.399 [2024-11-20 18:29:52.824638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:34.399 [2024-11-20 18:29:52.824645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:34.399 [2024-11-20 18:29:52.824653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:34.399 [2024-11-20 18:29:52.824660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:34.399 [2024-11-20 18:29:52.824668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:34.399 [2024-11-20 18:29:52.824676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:34.399 [2024-11-20 18:29:52.824683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:34.399 [2024-11-20 18:29:52.824691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:34.399 [2024-11-20 18:29:52.824698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:34.399 [2024-11-20 18:29:52.824706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:34.399 [2024-11-20 18:29:52.824713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:34.399 [2024-11-20 18:29:52.824720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:34.399 [2024-11-20 18:29:52.824728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:34.399 [2024-11-20 18:29:52.824747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:34.399 [2024-11-20 18:29:52.824755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:34.399 [2024-11-20 18:29:52.824763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:34.399 [2024-11-20 18:29:52.824778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:34.399 [2024-11-20 18:29:52.824786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:34.399 [2024-11-20 
18:29:52.824793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:34.399 [2024-11-20 18:29:52.824801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:34.399 [2024-11-20 18:29:52.824808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:34.399 [2024-11-20 18:29:52.824824] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:34.399 [2024-11-20 18:29:52.824834] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cbb96cf1-d64b-463e-90b3-2025057ee758 00:20:34.399 [2024-11-20 18:29:52.824842] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:34.399 [2024-11-20 18:29:52.824850] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:34.399 [2024-11-20 18:29:52.824857] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:34.399 [2024-11-20 18:29:52.824865] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:34.399 [2024-11-20 18:29:52.824872] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:34.399 [2024-11-20 18:29:52.824880] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:34.399 [2024-11-20 18:29:52.824887] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:34.399 [2024-11-20 18:29:52.824894] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:34.399 [2024-11-20 18:29:52.824900] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:34.399 [2024-11-20 18:29:52.824909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.399 [2024-11-20 18:29:52.824920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:34.399 [2024-11-20 18:29:52.824929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.157 ms 00:20:34.399 [2024-11-20 18:29:52.824937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.399 [2024-11-20 18:29:52.837359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.399 [2024-11-20 18:29:52.837391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:34.399 [2024-11-20 18:29:52.837401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.374 ms 00:20:34.399 [2024-11-20 18:29:52.837408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.399 [2024-11-20 18:29:52.837774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.399 [2024-11-20 18:29:52.837783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:34.399 [2024-11-20 18:29:52.837792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:20:34.399 [2024-11-20 18:29:52.837799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.399 [2024-11-20 18:29:52.873206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.399 [2024-11-20 18:29:52.873336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:34.399 [2024-11-20 18:29:52.873352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.399 [2024-11-20 18:29:52.873361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.399 [2024-11-20 18:29:52.873454] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:20:34.399 [2024-11-20 18:29:52.873463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:34.399 [2024-11-20 18:29:52.873472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.399 [2024-11-20 18:29:52.873479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.399 [2024-11-20 18:29:52.873520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.399 [2024-11-20 18:29:52.873529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:34.399 [2024-11-20 18:29:52.873537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.399 [2024-11-20 18:29:52.873544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.399 [2024-11-20 18:29:52.873560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.399 [2024-11-20 18:29:52.873570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:34.399 [2024-11-20 18:29:52.873578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.399 [2024-11-20 18:29:52.873585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.399 [2024-11-20 18:29:52.950380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.399 [2024-11-20 18:29:52.950420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:34.399 [2024-11-20 18:29:52.950431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.399 [2024-11-20 18:29:52.950439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.399 [2024-11-20 18:29:53.013773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.399 [2024-11-20 18:29:53.013817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:34.399 [2024-11-20 18:29:53.013827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.399 [2024-11-20 18:29:53.013835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.399 [2024-11-20 18:29:53.013894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.399 [2024-11-20 18:29:53.013903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:34.399 [2024-11-20 18:29:53.013912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.399 [2024-11-20 18:29:53.013919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.399 [2024-11-20 18:29:53.013947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.399 [2024-11-20 18:29:53.013954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:34.399 [2024-11-20 18:29:53.013966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.399 [2024-11-20 18:29:53.013973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.399 [2024-11-20 18:29:53.014061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.399 [2024-11-20 18:29:53.014071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:34.399 [2024-11-20 18:29:53.014079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.399 [2024-11-20 18:29:53.014086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:20:34.399 [2024-11-20 18:29:53.014138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.399 [2024-11-20 18:29:53.014148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:34.400 [2024-11-20 18:29:53.014156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.400 [2024-11-20 18:29:53.014166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.400 [2024-11-20 18:29:53.014204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.400 [2024-11-20 18:29:53.014213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:34.400 [2024-11-20 18:29:53.014220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.400 [2024-11-20 18:29:53.014228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.400 [2024-11-20 18:29:53.014273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.400 [2024-11-20 18:29:53.014282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:34.400 [2024-11-20 18:29:53.014294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.400 [2024-11-20 18:29:53.014301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.400 [2024-11-20 18:29:53.014439] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 341.672 ms, result 0 00:20:35.337 00:20:35.337 00:20:35.337 18:29:53 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:20:35.337 18:29:53 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:35.905 18:29:54 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:35.905 [2024-11-20 18:29:54.377055] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
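The three ftl.ftl_trim commands above are the trim test's verify-and-rewrite step: cmp --bytes=4194304 against /dev/zero asserts that the trimmed 4 MiB prefix of the earlier read-back data now reads as zeroes, md5sum records a checksum of that file, and spdk_dd then writes a fresh 1024-block random pattern into the ftl0 bdev, which triggers the second FTL startup traced below. A minimal sketch of the same shape, using the paths shown in the log (the read-back that produced test/ftl/data happened earlier in the test and is assumed here):

  SPDK=/home/vagrant/spdk_repo/spdk
  CFG=$SPDK/test/ftl/config/ftl.json
  # trimmed 4 MiB prefix of the earlier read-back must compare equal to zeroes
  cmp --bytes=4194304 $SPDK/test/ftl/data /dev/zero
  # checksum the read-back data for later comparison
  md5sum $SPDK/test/ftl/data
  # rewrite 1024 blocks of the prepared random pattern through the ftl0 bdev
  $SPDK/build/bin/spdk_dd --if=$SPDK/test/ftl/random_pattern --ob=ftl0 \
      --count=1024 --json=$CFG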
00:20:35.905 [2024-11-20 18:29:54.377381] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76747 ] 00:20:36.165 [2024-11-20 18:29:54.539350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.165 [2024-11-20 18:29:54.652166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:36.424 [2024-11-20 18:29:54.929903] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:36.424 [2024-11-20 18:29:54.929966] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:36.689 [2024-11-20 18:29:55.088219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.689 [2024-11-20 18:29:55.088263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:36.689 [2024-11-20 18:29:55.088276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:36.689 [2024-11-20 18:29:55.088284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.689 [2024-11-20 18:29:55.090948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.689 [2024-11-20 18:29:55.090985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:36.689 [2024-11-20 18:29:55.090995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.648 ms 00:20:36.689 [2024-11-20 18:29:55.091002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.689 [2024-11-20 18:29:55.091076] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:36.689 [2024-11-20 18:29:55.091867] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:36.689 [2024-11-20 18:29:55.091902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.689 [2024-11-20 18:29:55.091910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:36.689 [2024-11-20 18:29:55.091919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.832 ms 00:20:36.689 [2024-11-20 18:29:55.091926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.689 [2024-11-20 18:29:55.093118] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:36.689 [2024-11-20 18:29:55.105724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.689 [2024-11-20 18:29:55.105761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:36.689 [2024-11-20 18:29:55.105772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.608 ms 00:20:36.689 [2024-11-20 18:29:55.105780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.689 [2024-11-20 18:29:55.105868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.689 [2024-11-20 18:29:55.105879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:36.689 [2024-11-20 18:29:55.105888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:36.689 [2024-11-20 18:29:55.105895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.689 [2024-11-20 18:29:55.111004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
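The two "Currently unable to find bdev with name: nvc0n1" notices above are benign: spdk_dd polls for the NV-cache bdev while the --json configuration is still being applied, and once nvc0n1p0 exists it is attached as the write buffer cache, as the subsequent notices show. The ftl0 definition itself lives in the ftl.json passed on the command line; a hypothetical peek at it, assuming the standard SPDK JSON-config layout and a bdev_ftl_create entry naming the base and cache bdevs:

  # list the FTL bdev definition from the config consumed by spdk_dd
  # (subsystems/config/method is the standard SPDK JSON-config shape;
  #  the bdev_ftl_create entry and its params are assumptions here)
  jq '.subsystems[] | select(.subsystem == "bdev")
      | .config[] | select(.method == "bdev_ftl_create")' \
     /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json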
00:20:36.689 [2024-11-20 18:29:55.111033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:36.689 [2024-11-20 18:29:55.111042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.070 ms 00:20:36.689 [2024-11-20 18:29:55.111050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.689 [2024-11-20 18:29:55.111151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.689 [2024-11-20 18:29:55.111162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:36.689 [2024-11-20 18:29:55.111171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:36.689 [2024-11-20 18:29:55.111178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.689 [2024-11-20 18:29:55.111203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.689 [2024-11-20 18:29:55.111213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:36.689 [2024-11-20 18:29:55.111221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:36.689 [2024-11-20 18:29:55.111229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.689 [2024-11-20 18:29:55.111249] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:36.689 [2024-11-20 18:29:55.114589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.689 [2024-11-20 18:29:55.114618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:36.689 [2024-11-20 18:29:55.114627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.345 ms 00:20:36.689 [2024-11-20 18:29:55.114634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.689 [2024-11-20 18:29:55.114668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.689 [2024-11-20 18:29:55.114676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:36.689 [2024-11-20 18:29:55.114685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:36.689 [2024-11-20 18:29:55.114691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.689 [2024-11-20 18:29:55.114708] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:36.689 [2024-11-20 18:29:55.114728] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:36.689 [2024-11-20 18:29:55.114762] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:36.689 [2024-11-20 18:29:55.114776] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:36.689 [2024-11-20 18:29:55.114877] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:36.689 [2024-11-20 18:29:55.114887] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:36.689 [2024-11-20 18:29:55.114897] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:36.689 [2024-11-20 18:29:55.114907] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:36.689 [2024-11-20 18:29:55.114918] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:36.689 [2024-11-20 18:29:55.114927] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:36.689 [2024-11-20 18:29:55.114934] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:36.689 [2024-11-20 18:29:55.114941] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:36.689 [2024-11-20 18:29:55.114948] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:36.689 [2024-11-20 18:29:55.114955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.689 [2024-11-20 18:29:55.114963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:36.689 [2024-11-20 18:29:55.114970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:20:36.689 [2024-11-20 18:29:55.114977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.689 [2024-11-20 18:29:55.115063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.689 [2024-11-20 18:29:55.115071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:36.689 [2024-11-20 18:29:55.115081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:36.689 [2024-11-20 18:29:55.115088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.689 [2024-11-20 18:29:55.115219] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:36.689 [2024-11-20 18:29:55.115230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:36.689 [2024-11-20 18:29:55.115239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:36.689 [2024-11-20 18:29:55.115246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.689 [2024-11-20 18:29:55.115254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:36.689 [2024-11-20 18:29:55.115260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:36.689 [2024-11-20 18:29:55.115266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:36.689 [2024-11-20 18:29:55.115274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:36.689 [2024-11-20 18:29:55.115281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:36.689 [2024-11-20 18:29:55.115288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:36.689 [2024-11-20 18:29:55.115295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:36.689 [2024-11-20 18:29:55.115301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:36.689 [2024-11-20 18:29:55.115307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:36.690 [2024-11-20 18:29:55.115320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:36.690 [2024-11-20 18:29:55.115327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:36.690 [2024-11-20 18:29:55.115334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.690 [2024-11-20 18:29:55.115342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:36.690 [2024-11-20 18:29:55.115348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:36.690 [2024-11-20 18:29:55.115355] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.690 [2024-11-20 18:29:55.115361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:36.690 [2024-11-20 18:29:55.115367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:36.690 [2024-11-20 18:29:55.115374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:36.690 [2024-11-20 18:29:55.115381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:36.690 [2024-11-20 18:29:55.115387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:36.690 [2024-11-20 18:29:55.115394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:36.690 [2024-11-20 18:29:55.115400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:36.690 [2024-11-20 18:29:55.115407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:36.690 [2024-11-20 18:29:55.115413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:36.690 [2024-11-20 18:29:55.115419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:36.690 [2024-11-20 18:29:55.115425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:36.690 [2024-11-20 18:29:55.115432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:36.690 [2024-11-20 18:29:55.115438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:36.690 [2024-11-20 18:29:55.115444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:36.690 [2024-11-20 18:29:55.115451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:36.690 [2024-11-20 18:29:55.115457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:36.690 [2024-11-20 18:29:55.115464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:36.690 [2024-11-20 18:29:55.115470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:36.690 [2024-11-20 18:29:55.115477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:36.690 [2024-11-20 18:29:55.115483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:36.690 [2024-11-20 18:29:55.115490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.690 [2024-11-20 18:29:55.115496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:36.690 [2024-11-20 18:29:55.115502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:36.690 [2024-11-20 18:29:55.115509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.690 [2024-11-20 18:29:55.115516] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:36.690 [2024-11-20 18:29:55.115523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:36.690 [2024-11-20 18:29:55.115529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:36.690 [2024-11-20 18:29:55.115538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.690 [2024-11-20 18:29:55.115545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:36.690 [2024-11-20 18:29:55.115553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:36.690 [2024-11-20 18:29:55.115560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:36.690 
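Every region is reported twice across these dumps: the layout dump above gives offsets and sizes in MiB, and the SB metadata dump that follows repeats the same geometry as raw block counts in hex. The two agree under FTL's 4 KiB block size, which also matches the L2P geometry printed a few lines up (23592960 entries of 4 bytes each is exactly the 90.00 MiB l2p region). A quick cross-check in shell arithmetic:

  echo $(( 0x5a00 * 4096 / 1048576 ))      # l2p region, type 0x2 -> 90 MiB
  echo $(( 23592960 * 4 / 1048576 ))       # L2P entries x address size -> 90 MiB
  echo $(( 0x1900000 * 4096 / 1048576 ))   # data region, type 0x9 -> 102400 MiB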
[2024-11-20 18:29:55.115566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:36.690 [2024-11-20 18:29:55.115573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:36.690 [2024-11-20 18:29:55.115580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:36.690 [2024-11-20 18:29:55.115587] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:36.690 [2024-11-20 18:29:55.115597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:36.690 [2024-11-20 18:29:55.115604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:36.690 [2024-11-20 18:29:55.115612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:36.690 [2024-11-20 18:29:55.115619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:36.690 [2024-11-20 18:29:55.115625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:36.690 [2024-11-20 18:29:55.115632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:36.690 [2024-11-20 18:29:55.115639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:36.690 [2024-11-20 18:29:55.115646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:36.690 [2024-11-20 18:29:55.115652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:36.690 [2024-11-20 18:29:55.115659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:36.690 [2024-11-20 18:29:55.115665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:36.690 [2024-11-20 18:29:55.115672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:36.690 [2024-11-20 18:29:55.115679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:36.690 [2024-11-20 18:29:55.115686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:36.690 [2024-11-20 18:29:55.115693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:36.690 [2024-11-20 18:29:55.115700] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:36.690 [2024-11-20 18:29:55.115708] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:36.690 [2024-11-20 18:29:55.115716] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:36.690 [2024-11-20 18:29:55.115724] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:36.690 [2024-11-20 18:29:55.115730] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:36.690 [2024-11-20 18:29:55.115737] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:36.690 [2024-11-20 18:29:55.115744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.690 [2024-11-20 18:29:55.115751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:36.690 [2024-11-20 18:29:55.115760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms 00:20:36.690 [2024-11-20 18:29:55.115767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.690 [2024-11-20 18:29:55.142334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.690 [2024-11-20 18:29:55.142368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:36.690 [2024-11-20 18:29:55.142378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.517 ms 00:20:36.690 [2024-11-20 18:29:55.142385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.690 [2024-11-20 18:29:55.142504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.690 [2024-11-20 18:29:55.142518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:36.690 [2024-11-20 18:29:55.142527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:20:36.690 [2024-11-20 18:29:55.142534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.690 [2024-11-20 18:29:55.188584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.690 [2024-11-20 18:29:55.188622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:36.690 [2024-11-20 18:29:55.188634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.029 ms 00:20:36.690 [2024-11-20 18:29:55.188645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.690 [2024-11-20 18:29:55.188736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.690 [2024-11-20 18:29:55.188757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:36.690 [2024-11-20 18:29:55.188766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:36.690 [2024-11-20 18:29:55.188773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.690 [2024-11-20 18:29:55.189128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.690 [2024-11-20 18:29:55.189144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:36.690 [2024-11-20 18:29:55.189153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:20:36.690 [2024-11-20 18:29:55.189166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.690 [2024-11-20 18:29:55.189293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.690 [2024-11-20 18:29:55.189310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:36.690 [2024-11-20 18:29:55.189319] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:20:36.690 [2024-11-20 18:29:55.189326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.690 [2024-11-20 18:29:55.202856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.690 [2024-11-20 18:29:55.202994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:36.690 [2024-11-20 18:29:55.203010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.510 ms 00:20:36.690 [2024-11-20 18:29:55.203018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.690 [2024-11-20 18:29:55.215885] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:36.690 [2024-11-20 18:29:55.215919] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:36.690 [2024-11-20 18:29:55.215930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.690 [2024-11-20 18:29:55.215938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:36.690 [2024-11-20 18:29:55.215946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.796 ms 00:20:36.690 [2024-11-20 18:29:55.215953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.690 [2024-11-20 18:29:55.240289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.691 [2024-11-20 18:29:55.240329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:36.691 [2024-11-20 18:29:55.240340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.263 ms 00:20:36.691 [2024-11-20 18:29:55.240349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.691 [2024-11-20 18:29:55.252132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.691 [2024-11-20 18:29:55.252161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:36.691 [2024-11-20 18:29:55.252171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.711 ms 00:20:36.691 [2024-11-20 18:29:55.252177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.691 [2024-11-20 18:29:55.263755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.691 [2024-11-20 18:29:55.263784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:36.691 [2024-11-20 18:29:55.263794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.514 ms 00:20:36.691 [2024-11-20 18:29:55.263801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.691 [2024-11-20 18:29:55.264428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.691 [2024-11-20 18:29:55.264447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:36.691 [2024-11-20 18:29:55.264456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:20:36.691 [2024-11-20 18:29:55.264463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.014 [2024-11-20 18:29:55.320435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.014 [2024-11-20 18:29:55.320489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:37.014 [2024-11-20 18:29:55.320502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 55.948 ms 00:20:37.014 [2024-11-20 18:29:55.320511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.014 [2024-11-20 18:29:55.331101] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:37.014 [2024-11-20 18:29:55.346225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.014 [2024-11-20 18:29:55.346265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:37.014 [2024-11-20 18:29:55.346277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.609 ms 00:20:37.014 [2024-11-20 18:29:55.346285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.014 [2024-11-20 18:29:55.346373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.014 [2024-11-20 18:29:55.346384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:37.014 [2024-11-20 18:29:55.346393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:37.014 [2024-11-20 18:29:55.346401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.014 [2024-11-20 18:29:55.346450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.014 [2024-11-20 18:29:55.346459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:37.014 [2024-11-20 18:29:55.346468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:37.014 [2024-11-20 18:29:55.346475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.014 [2024-11-20 18:29:55.346499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.014 [2024-11-20 18:29:55.346509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:37.014 [2024-11-20 18:29:55.346517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:37.014 [2024-11-20 18:29:55.346524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.014 [2024-11-20 18:29:55.346556] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:37.014 [2024-11-20 18:29:55.346566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.014 [2024-11-20 18:29:55.346574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:37.014 [2024-11-20 18:29:55.346582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:37.014 [2024-11-20 18:29:55.346590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.014 [2024-11-20 18:29:55.371098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.014 [2024-11-20 18:29:55.371136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:37.014 [2024-11-20 18:29:55.371148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.483 ms 00:20:37.014 [2024-11-20 18:29:55.371156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.014 [2024-11-20 18:29:55.371256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.014 [2024-11-20 18:29:55.371268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:37.014 [2024-11-20 18:29:55.371277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:37.014 [2024-11-20 18:29:55.371285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
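Each management step in the startup trace above is logged as the same four-field quadruple from mngt/ftl_mngt.c:427-431 -- Action, name, duration, status -- and the "Management process finished, name 'FTL startup', duration = 283.626 ms, result 0" entry that follows rolls the per-step durations up into one total. A minimal C sketch of that bookkeeping, illustrative only: the struct and step table below are hypothetical, not SPDK's internal types, though the names and durations are copied from the log entries above.

#include <stdio.h>
#include <stddef.h>

/* One management step, mirroring the Action/name/duration/status
 * quadruple that each trace_step entry above prints. */
struct step {
    const char *name;
    double duration_ms;
    int status;
};

int main(void)
{
    /* A few of the 'FTL startup' steps recorded in this log. */
    struct step steps[] = {
        { "Layout upgrade",           0.593, 0 },
        { "Initialize metadata",     26.517, 0 },
        { "Initialize NV cache",     46.029, 0 },
        { "Restore P2L checkpoints", 55.948, 0 },
    };
    double total_ms = 0.0;

    for (size_t i = 0; i < sizeof(steps) / sizeof(steps[0]); i++) {
        printf("Action\n  name: %s\n  duration: %.3f ms\n  status: %d\n",
               steps[i].name, steps[i].duration_ms, steps[i].status);
        total_ms += steps[i].duration_ms;
    }
    /* The finish_msg line in the log reports the sum over all steps. */
    printf("partial duration over these steps = %.3f ms\n", total_ms);
    return 0;
}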
00:20:37.014 [2024-11-20 18:29:55.372158] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:37.014 [2024-11-20 18:29:55.375373] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 283.626 ms, result 0 00:20:37.014 [2024-11-20 18:29:55.376737] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:37.014 [2024-11-20 18:29:55.389959] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:37.291  [2024-11-20T18:29:55.920Z] Copying: 4096/4096 [kB] (average 10 MBps)[2024-11-20 18:29:55.779497] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:37.291 [2024-11-20 18:29:55.789462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.291 [2024-11-20 18:29:55.789665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:37.291 [2024-11-20 18:29:55.789689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:37.291 [2024-11-20 18:29:55.789709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.291 [2024-11-20 18:29:55.789742] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:37.291 [2024-11-20 18:29:55.792817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.291 [2024-11-20 18:29:55.792980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:37.291 [2024-11-20 18:29:55.793002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.060 ms 00:20:37.291 [2024-11-20 18:29:55.793012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.291 [2024-11-20 18:29:55.795866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.291 [2024-11-20 18:29:55.795909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:37.291 [2024-11-20 18:29:55.795921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.819 ms 00:20:37.291 [2024-11-20 18:29:55.795931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.291 [2024-11-20 18:29:55.800685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.291 [2024-11-20 18:29:55.800845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:37.291 [2024-11-20 18:29:55.800910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.736 ms 00:20:37.291 [2024-11-20 18:29:55.800934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.291 [2024-11-20 18:29:55.807960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.291 [2024-11-20 18:29:55.808137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:37.291 [2024-11-20 18:29:55.808332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.973 ms 00:20:37.292 [2024-11-20 18:29:55.808376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.292 [2024-11-20 18:29:55.834516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.292 [2024-11-20 18:29:55.834696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:37.292 [2024-11-20 18:29:55.835135] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 26.072 ms 00:20:37.292 [2024-11-20 18:29:55.835193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.292 [2024-11-20 18:29:55.851530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.292 [2024-11-20 18:29:55.851722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:37.292 [2024-11-20 18:29:55.851933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.149 ms 00:20:37.292 [2024-11-20 18:29:55.851978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.292 [2024-11-20 18:29:55.852170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.292 [2024-11-20 18:29:55.852288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:37.292 [2024-11-20 18:29:55.852309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:20:37.292 [2024-11-20 18:29:55.852372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.292 [2024-11-20 18:29:55.878757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.292 [2024-11-20 18:29:55.878924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:37.292 [2024-11-20 18:29:55.878984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.335 ms 00:20:37.292 [2024-11-20 18:29:55.879006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.292 [2024-11-20 18:29:55.904981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.292 [2024-11-20 18:29:55.905166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:37.292 [2024-11-20 18:29:55.905229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.910 ms 00:20:37.292 [2024-11-20 18:29:55.905252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.554 [2024-11-20 18:29:55.930421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.554 [2024-11-20 18:29:55.930583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:37.554 [2024-11-20 18:29:55.930643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.105 ms 00:20:37.554 [2024-11-20 18:29:55.930664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.554 [2024-11-20 18:29:55.955530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.554 [2024-11-20 18:29:55.955695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:37.554 [2024-11-20 18:29:55.955755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.658 ms 00:20:37.554 [2024-11-20 18:29:55.955776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.554 [2024-11-20 18:29:55.955827] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:37.554 [2024-11-20 18:29:55.955857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:37.554 [2024-11-20 18:29:55.955890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:37.554 [2024-11-20 18:29:55.955918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:37.554 [2024-11-20 18:29:55.955947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:20:37.554 [2024-11-20 18:29:55.956050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:37.554 [2024-11-20 18:29:55.956081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:37.554 [2024-11-20 18:29:55.956135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:37.554 [2024-11-20 18:29:55.956165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:37.554 [2024-11-20 18:29:55.956595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:37.554 [2024-11-20 18:29:55.956620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:37.554 [2024-11-20 18:29:55.956630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:37.554 [2024-11-20 18:29:55.956640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:37.554 [2024-11-20 18:29:55.956648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:37.554 [2024-11-20 18:29:55.956657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:37.554 [2024-11-20 18:29:55.956665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:37.554 [2024-11-20 18:29:55.956673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:37.554 [2024-11-20 18:29:55.956680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:37.554 [2024-11-20 18:29:55.956688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:37.554 [2024-11-20 18:29:55.956695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:37.554 [2024-11-20 18:29:55.956703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:37.554 [2024-11-20 18:29:55.956710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:37.554 [2024-11-20 18:29:55.956719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:37.554 [2024-11-20 18:29:55.956727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:37.554 [2024-11-20 18:29:55.956734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:37.554 [2024-11-20 18:29:55.956741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.956763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.956771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.956778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.956786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.956793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.956800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.956807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.956815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.956822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.956830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.956838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.956845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.956853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.956863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.956871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.956878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.956889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.956898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.956906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.956913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.956921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.956928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.956936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.956944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.956951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.956958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.956966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.956974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.956982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.956989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.956996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957194] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:37.555 [2024-11-20 18:29:55.957391] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:37.555 [2024-11-20 18:29:55.957400] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cbb96cf1-d64b-463e-90b3-2025057ee758 00:20:37.555 [2024-11-20 18:29:55.957408] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:37.555 [2024-11-20 18:29:55.957416] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:20:37.555 [2024-11-20 18:29:55.957424] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:37.555 [2024-11-20 18:29:55.957434] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:37.555 [2024-11-20 18:29:55.957442] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:37.555 [2024-11-20 18:29:55.957450] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:37.555 [2024-11-20 18:29:55.957458] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:37.555 [2024-11-20 18:29:55.957464] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:37.555 [2024-11-20 18:29:55.957471] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:37.555 [2024-11-20 18:29:55.957482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.555 [2024-11-20 18:29:55.957494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:37.555 [2024-11-20 18:29:55.957504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.655 ms 00:20:37.555 [2024-11-20 18:29:55.957512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.555 [2024-11-20 18:29:55.971063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.556 [2024-11-20 18:29:55.971145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:37.556 [2024-11-20 18:29:55.971158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.494 ms 00:20:37.556 [2024-11-20 18:29:55.971166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.556 [2024-11-20 18:29:55.971586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.556 [2024-11-20 18:29:55.971598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:37.556 [2024-11-20 18:29:55.971607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:20:37.556 [2024-11-20 18:29:55.971615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.556 [2024-11-20 18:29:56.010820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.556 [2024-11-20 18:29:56.011012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:37.556 [2024-11-20 18:29:56.011033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.556 [2024-11-20 18:29:56.011042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.556 [2024-11-20 18:29:56.011158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.556 [2024-11-20 18:29:56.011170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:37.556 [2024-11-20 18:29:56.011179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.556 [2024-11-20 18:29:56.011187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.556 [2024-11-20 18:29:56.011240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.556 [2024-11-20 18:29:56.011250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:37.556 [2024-11-20 18:29:56.011258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.556 [2024-11-20 18:29:56.011266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.556 [2024-11-20 18:29:56.011284] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.556 [2024-11-20 18:29:56.011297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:37.556 [2024-11-20 18:29:56.011305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.556 [2024-11-20 18:29:56.011313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.556 [2024-11-20 18:29:56.096989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.556 [2024-11-20 18:29:56.097047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:37.556 [2024-11-20 18:29:56.097060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.556 [2024-11-20 18:29:56.097069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.556 [2024-11-20 18:29:56.167294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.556 [2024-11-20 18:29:56.167352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:37.556 [2024-11-20 18:29:56.167365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.556 [2024-11-20 18:29:56.167374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.556 [2024-11-20 18:29:56.167435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.556 [2024-11-20 18:29:56.167444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:37.556 [2024-11-20 18:29:56.167454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.556 [2024-11-20 18:29:56.167463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.556 [2024-11-20 18:29:56.167496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.556 [2024-11-20 18:29:56.167506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:37.556 [2024-11-20 18:29:56.167522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.556 [2024-11-20 18:29:56.167530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.556 [2024-11-20 18:29:56.167631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.556 [2024-11-20 18:29:56.167641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:37.556 [2024-11-20 18:29:56.167652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.556 [2024-11-20 18:29:56.167660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.556 [2024-11-20 18:29:56.167701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.556 [2024-11-20 18:29:56.167714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:37.556 [2024-11-20 18:29:56.167723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.556 [2024-11-20 18:29:56.167734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.556 [2024-11-20 18:29:56.167784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.556 [2024-11-20 18:29:56.167795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:37.556 [2024-11-20 18:29:56.167804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.556 [2024-11-20 18:29:56.167813] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:37.556 [2024-11-20 18:29:56.167865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:37.556 [2024-11-20 18:29:56.167875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:37.556 [2024-11-20 18:29:56.167888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:37.556 [2024-11-20 18:29:56.167896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.556 [2024-11-20 18:29:56.168057] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 378.581 ms, result 0 00:20:38.497 00:20:38.497 00:20:38.497 18:29:56 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:38.497 18:29:56 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=76783 00:20:38.497 18:29:56 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 76783 00:20:38.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.497 18:29:56 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76783 ']' 00:20:38.497 18:29:56 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.497 18:29:56 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:38.497 18:29:56 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.497 18:29:56 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:38.497 18:29:56 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:38.497 [2024-11-20 18:29:57.014626] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
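Just above, trim.sh has torn down the previous instance ('FTL shutdown', result 0) and relaunched spdk_tgt as pid 76783, and the harness blocks in waitforlisten until the new target accepts connections on /var/tmp/spdk.sock. A minimal C sketch of that wait-until-listening pattern, assuming only the socket path shown in the log; this is illustrative -- the real waitforlisten is a shell helper in autotest_common.sh that polls the RPC socket, not this code.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

/* Retry connect() on the Unix domain socket named in the log
 * (/var/tmp/spdk.sock) until the freshly started target is
 * listening, or the retry budget runs out. */
static int wait_for_listen(const char *path, int max_retries)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);          /* target is up and listening */
            return 0;
        }
        close(fd);
        usleep(100 * 1000);     /* 100 ms between attempts */
    }
    return -1;
}

int main(void)
{
    if (wait_for_listen("/var/tmp/spdk.sock", 100) != 0) {
        fprintf(stderr, "target never started listening\n");
        return 1;
    }
    puts("socket is accepting connections");
    return 0;
}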
00:20:38.497 [2024-11-20 18:29:57.014773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76783 ] 00:20:38.757 [2024-11-20 18:29:57.177146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.757 [2024-11-20 18:29:57.298776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.700 18:29:57 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:39.701 18:29:57 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:39.701 18:29:57 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:39.701 [2024-11-20 18:29:58.215408] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:39.701 [2024-11-20 18:29:58.215485] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:39.963 [2024-11-20 18:29:58.375189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.963 [2024-11-20 18:29:58.375252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:39.963 [2024-11-20 18:29:58.375269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:39.963 [2024-11-20 18:29:58.375278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.963 [2024-11-20 18:29:58.378457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.963 [2024-11-20 18:29:58.378510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:39.963 [2024-11-20 18:29:58.378524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.155 ms 00:20:39.963 [2024-11-20 18:29:58.378532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.963 [2024-11-20 18:29:58.378669] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:39.963 [2024-11-20 18:29:58.379711] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:39.963 [2024-11-20 18:29:58.379778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.963 [2024-11-20 18:29:58.379789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:39.963 [2024-11-20 18:29:58.379801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.120 ms 00:20:39.963 [2024-11-20 18:29:58.379810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.963 [2024-11-20 18:29:58.381721] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:39.963 [2024-11-20 18:29:58.396148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.963 [2024-11-20 18:29:58.396206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:39.963 [2024-11-20 18:29:58.396222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.437 ms 00:20:39.963 [2024-11-20 18:29:58.396233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.964 [2024-11-20 18:29:58.396357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.964 [2024-11-20 18:29:58.396372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:39.964 [2024-11-20 18:29:58.396381] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:39.964 [2024-11-20 18:29:58.396391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.964 [2024-11-20 18:29:58.404974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.964 [2024-11-20 18:29:58.405026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:39.964 [2024-11-20 18:29:58.405037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.525 ms 00:20:39.964 [2024-11-20 18:29:58.405048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.964 [2024-11-20 18:29:58.405204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.964 [2024-11-20 18:29:58.405219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:39.964 [2024-11-20 18:29:58.405228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:20:39.964 [2024-11-20 18:29:58.405238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.964 [2024-11-20 18:29:58.405275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.964 [2024-11-20 18:29:58.405286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:39.964 [2024-11-20 18:29:58.405294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:39.964 [2024-11-20 18:29:58.405304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.964 [2024-11-20 18:29:58.405329] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:39.964 [2024-11-20 18:29:58.409469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.964 [2024-11-20 18:29:58.409510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:39.964 [2024-11-20 18:29:58.409523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.143 ms 00:20:39.964 [2024-11-20 18:29:58.409531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.964 [2024-11-20 18:29:58.409611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.964 [2024-11-20 18:29:58.409621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:39.964 [2024-11-20 18:29:58.409632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:39.964 [2024-11-20 18:29:58.409643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.964 [2024-11-20 18:29:58.409666] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:39.964 [2024-11-20 18:29:58.409687] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:39.964 [2024-11-20 18:29:58.409731] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:39.964 [2024-11-20 18:29:58.409746] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:39.964 [2024-11-20 18:29:58.409856] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:39.964 [2024-11-20 18:29:58.409867] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:39.964 [2024-11-20 18:29:58.409882] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:39.964 [2024-11-20 18:29:58.409895] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:39.964 [2024-11-20 18:29:58.409910] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:39.964 [2024-11-20 18:29:58.409919] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:39.964 [2024-11-20 18:29:58.409929] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:39.964 [2024-11-20 18:29:58.409937] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:39.964 [2024-11-20 18:29:58.409948] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:39.964 [2024-11-20 18:29:58.409957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.964 [2024-11-20 18:29:58.409967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:39.964 [2024-11-20 18:29:58.409975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:20:39.964 [2024-11-20 18:29:58.409984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.964 [2024-11-20 18:29:58.410077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.964 [2024-11-20 18:29:58.410088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:39.964 [2024-11-20 18:29:58.410114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:20:39.964 [2024-11-20 18:29:58.410123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.964 [2024-11-20 18:29:58.410229] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:39.964 [2024-11-20 18:29:58.410243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:39.964 [2024-11-20 18:29:58.410251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:39.964 [2024-11-20 18:29:58.410261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.964 [2024-11-20 18:29:58.410269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:39.964 [2024-11-20 18:29:58.410279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:39.964 [2024-11-20 18:29:58.410287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:39.964 [2024-11-20 18:29:58.410300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:39.964 [2024-11-20 18:29:58.410307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:39.964 [2024-11-20 18:29:58.410317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:39.964 [2024-11-20 18:29:58.410324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:39.964 [2024-11-20 18:29:58.410333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:39.964 [2024-11-20 18:29:58.410340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:39.964 [2024-11-20 18:29:58.410349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:39.964 [2024-11-20 18:29:58.410355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:39.964 [2024-11-20 18:29:58.410364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.964 
[2024-11-20 18:29:58.410373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:39.964 [2024-11-20 18:29:58.410382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:39.964 [2024-11-20 18:29:58.410390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.964 [2024-11-20 18:29:58.410399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:39.964 [2024-11-20 18:29:58.410413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:39.964 [2024-11-20 18:29:58.410421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:39.964 [2024-11-20 18:29:58.410427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:39.964 [2024-11-20 18:29:58.410439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:39.964 [2024-11-20 18:29:58.410445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:39.964 [2024-11-20 18:29:58.410453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:39.964 [2024-11-20 18:29:58.410461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:39.964 [2024-11-20 18:29:58.410469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:39.964 [2024-11-20 18:29:58.410476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:39.964 [2024-11-20 18:29:58.410485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:39.964 [2024-11-20 18:29:58.410491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:39.964 [2024-11-20 18:29:58.410502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:39.964 [2024-11-20 18:29:58.410508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:39.964 [2024-11-20 18:29:58.410516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:39.964 [2024-11-20 18:29:58.410523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:39.964 [2024-11-20 18:29:58.410532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:39.964 [2024-11-20 18:29:58.410539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:39.964 [2024-11-20 18:29:58.410547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:39.964 [2024-11-20 18:29:58.410554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:39.964 [2024-11-20 18:29:58.410564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.964 [2024-11-20 18:29:58.410571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:39.964 [2024-11-20 18:29:58.410581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:39.964 [2024-11-20 18:29:58.410588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.964 [2024-11-20 18:29:58.410597] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:39.964 [2024-11-20 18:29:58.410605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:39.964 [2024-11-20 18:29:58.410616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:39.964 [2024-11-20 18:29:58.410625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.964 [2024-11-20 18:29:58.410635] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:39.964 [2024-11-20 18:29:58.410644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:39.964 [2024-11-20 18:29:58.410654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:39.964 [2024-11-20 18:29:58.410661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:39.964 [2024-11-20 18:29:58.410670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:39.964 [2024-11-20 18:29:58.410676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:39.964 [2024-11-20 18:29:58.410687] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:39.964 [2024-11-20 18:29:58.410697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:39.964 [2024-11-20 18:29:58.410711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:39.964 [2024-11-20 18:29:58.410718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:39.964 [2024-11-20 18:29:58.410727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:39.965 [2024-11-20 18:29:58.410735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:39.965 [2024-11-20 18:29:58.410744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:39.965 [2024-11-20 18:29:58.410752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:39.965 [2024-11-20 18:29:58.410760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:39.965 [2024-11-20 18:29:58.410768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:39.965 [2024-11-20 18:29:58.410777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:39.965 [2024-11-20 18:29:58.410784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:39.965 [2024-11-20 18:29:58.410792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:39.965 [2024-11-20 18:29:58.410801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:39.965 [2024-11-20 18:29:58.410810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:39.965 [2024-11-20 18:29:58.410817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:39.965 [2024-11-20 18:29:58.410826] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:39.965 [2024-11-20 
18:29:58.410835] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:39.965 [2024-11-20 18:29:58.410849] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:39.965 [2024-11-20 18:29:58.410857] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:39.965 [2024-11-20 18:29:58.410865] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:39.965 [2024-11-20 18:29:58.410873] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:39.965 [2024-11-20 18:29:58.410882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.965 [2024-11-20 18:29:58.410890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:39.965 [2024-11-20 18:29:58.410900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.721 ms 00:20:39.965 [2024-11-20 18:29:58.410907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.965 [2024-11-20 18:29:58.443785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.965 [2024-11-20 18:29:58.443979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:39.965 [2024-11-20 18:29:58.444442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.814 ms 00:20:39.965 [2024-11-20 18:29:58.444500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.965 [2024-11-20 18:29:58.444744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.965 [2024-11-20 18:29:58.444805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:39.965 [2024-11-20 18:29:58.444831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:20:39.965 [2024-11-20 18:29:58.444953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.965 [2024-11-20 18:29:58.480288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.965 [2024-11-20 18:29:58.480471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:39.965 [2024-11-20 18:29:58.480966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.284 ms 00:20:39.965 [2024-11-20 18:29:58.481035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.965 [2024-11-20 18:29:58.481246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.965 [2024-11-20 18:29:58.481293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:39.965 [2024-11-20 18:29:58.481320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:39.965 [2024-11-20 18:29:58.481400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.965 [2024-11-20 18:29:58.481961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.965 [2024-11-20 18:29:58.482034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:39.965 [2024-11-20 18:29:58.482562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.512 ms 00:20:39.965 [2024-11-20 18:29:58.482620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
00:20:39.965 [2024-11-20 18:29:58.443785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:39.965 [2024-11-20 18:29:58.443979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:20:39.965 [2024-11-20 18:29:58.444442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.814 ms
00:20:39.965 [2024-11-20 18:29:58.444500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:39.965 [2024-11-20 18:29:58.444744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:39.965 [2024-11-20 18:29:58.444805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:20:39.965 [2024-11-20 18:29:58.444831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms
00:20:39.965 [2024-11-20 18:29:58.444953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:39.965 [2024-11-20 18:29:58.480288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:39.965 [2024-11-20 18:29:58.480471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:20:39.965 [2024-11-20 18:29:58.480966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.284 ms
00:20:39.965 [2024-11-20 18:29:58.481035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:39.965 [2024-11-20 18:29:58.481246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:39.965 [2024-11-20 18:29:58.481293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:20:39.965 [2024-11-20 18:29:58.481320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:20:39.965 [2024-11-20 18:29:58.481400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:39.965 [2024-11-20 18:29:58.481961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:39.965 [2024-11-20 18:29:58.482034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:20:39.965 [2024-11-20 18:29:58.482562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.512 ms
00:20:39.965 [2024-11-20 18:29:58.482620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:39.965 [2024-11-20 18:29:58.482803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:39.965 [2024-11-20 18:29:58.482956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:20:39.965 [2024-11-20 18:29:58.482985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms
00:20:39.965 [2024-11-20 18:29:58.483006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:39.965 [2024-11-20 18:29:58.501185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:39.965 [2024-11-20 18:29:58.501352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:20:39.965 [2024-11-20 18:29:58.501375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.137 ms
00:20:39.965 [2024-11-20 18:29:58.501384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:39.965 [2024-11-20 18:29:58.515976] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:20:39.965 [2024-11-20 18:29:58.516026] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:20:39.965 [2024-11-20 18:29:58.516042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:39.965 [2024-11-20 18:29:58.516051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:20:39.965 [2024-11-20 18:29:58.516063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.526 ms
00:20:39.965 [2024-11-20 18:29:58.516072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:39.965 [2024-11-20 18:29:58.542079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:39.965 [2024-11-20 18:29:58.542278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:20:39.965 [2024-11-20 18:29:58.542306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.762 ms
00:20:39.965 [2024-11-20 18:29:58.542315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:39.965 [2024-11-20 18:29:58.555415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:39.965 [2024-11-20 18:29:58.555577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:20:39.965 [2024-11-20 18:29:58.555605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.000 ms
00:20:39.965 [2024-11-20 18:29:58.555613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:39.965 [2024-11-20 18:29:58.568295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:39.965 [2024-11-20 18:29:58.568340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:20:39.965 [2024-11-20 18:29:58.568356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.595 ms
00:20:39.965 [2024-11-20 18:29:58.568363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:39.965 [2024-11-20 18:29:58.569075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:39.965 [2024-11-20 18:29:58.569133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:20:39.965 [2024-11-20 18:29:58.569147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms
00:20:39.965 [2024-11-20 18:29:58.569155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:40.227 [2024-11-20 18:29:58.645886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:40.227 [2024-11-20 18:29:58.646176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:20:40.227 [2024-11-20 18:29:58.646209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.697 ms
00:20:40.227 [2024-11-20 18:29:58.646220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:40.227 [2024-11-20 18:29:58.657971] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:20:40.227 [2024-11-20 18:29:58.678055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:40.227 [2024-11-20 18:29:58.678133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:20:40.227 [2024-11-20 18:29:58.678151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.632 ms
00:20:40.227 [2024-11-20 18:29:58.678162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:40.227 [2024-11-20 18:29:58.678262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:40.227 [2024-11-20 18:29:58.678276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:20:40.227 [2024-11-20 18:29:58.678286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms
00:20:40.227 [2024-11-20 18:29:58.678296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:40.227 [2024-11-20 18:29:58.678353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:40.227 [2024-11-20 18:29:58.678365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:20:40.227 [2024-11-20 18:29:58.678373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms
00:20:40.227 [2024-11-20 18:29:58.678383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:40.227 [2024-11-20 18:29:58.678412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:40.227 [2024-11-20 18:29:58.678423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:20:40.227 [2024-11-20 18:29:58.678431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:20:40.227 [2024-11-20 18:29:58.678444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:40.227 [2024-11-20 18:29:58.678480] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:20:40.227 [2024-11-20 18:29:58.678495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:40.227 [2024-11-20 18:29:58.678503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:20:40.227 [2024-11-20 18:29:58.678516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms
00:20:40.227 [2024-11-20 18:29:58.678524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:40.227 [2024-11-20 18:29:58.705124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:40.227 [2024-11-20 18:29:58.705301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:20:40.227 [2024-11-20 18:29:58.705329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.566 ms
00:20:40.227 [2024-11-20 18:29:58.705337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
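Every management step in this startup sequence is traced as an Action followed by name/duration/status lines, so a startup-cost breakdown can be pulled straight out of a saved copy of this console output. A hypothetical one-liner, with build.log standing in for such a capture:

  # pair each FTL trace_step "name:" with the "duration:" that follows it
  grep -oE '(name|duration): .*' build.log | paste - -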
00:20:40.227 [2024-11-20 18:29:58.705468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:40.227 [2024-11-20 18:29:58.705481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:20:40.227 [2024-11-20 18:29:58.705492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms
00:20:40.227 [2024-11-20 18:29:58.705504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:40.227 [2024-11-20 18:29:58.706648] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:40.227 [2024-11-20 18:29:58.710270] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 331.114 ms, result 0
00:20:40.227 [2024-11-20 18:29:58.712926] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:40.227 Some configs were skipped because the RPC state that can call them passed over.
00:20:40.227 18:29:58 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:20:40.489 [2024-11-20 18:29:58.945945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:40.489 [2024-11-20 18:29:58.946018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:20:40.489 [2024-11-20 18:29:58.946034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.245 ms
00:20:40.489 [2024-11-20 18:29:58.946045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:40.489 [2024-11-20 18:29:58.946082] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.390 ms, result 0
00:20:40.489 true
00:20:40.489 18:29:58 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:20:40.749 [2024-11-20 18:29:59.157608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:40.749 [2024-11-20 18:29:59.157684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:20:40.749 [2024-11-20 18:29:59.157702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.653 ms
00:20:40.749 [2024-11-20 18:29:59.157710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:40.749 [2024-11-20 18:29:59.157753] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.808 ms, result 0
00:20:40.749 true
00:20:40.749 18:29:59 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 76783
00:20:40.749 18:29:59 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76783 ']'
00:20:40.749 18:29:59 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76783
00:20:40.749 18:29:59 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:20:40.749 18:29:59 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:40.749 18:29:59 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76783
00:20:40.749 killing process with pid 76783
18:29:59 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:40.749 18:29:59 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:40.749 18:29:59 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76783'
00:20:40.749 18:29:59 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76783
00:20:40.749 18:29:59 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76783
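The two RPCs above (trim.sh@99 and @100) each unmap 1024 blocks: one range at LBA 0 and one at LBA 23591936, which, given the 23592960 L2P entries reported by ftl_layout_setup elsewhere in this log, covers exactly the last 1024 blocks of the device. For reference, a sketch of the same calls against a running target, using the paths from this job:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$RPC" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
  "$RPC" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024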
00:20:41.316 [2024-11-20 18:29:59.833381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.316 [2024-11-20 18:29:59.833423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:20:41.316 [2024-11-20 18:29:59.833433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:20:41.316 [2024-11-20 18:29:59.833440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.316 [2024-11-20 18:29:59.833457] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:20:41.316 [2024-11-20 18:29:59.835724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.316 [2024-11-20 18:29:59.835751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:20:41.316 [2024-11-20 18:29:59.835764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.253 ms
00:20:41.316 [2024-11-20 18:29:59.835770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.316 [2024-11-20 18:29:59.836022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.316 [2024-11-20 18:29:59.836029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:20:41.316 [2024-11-20 18:29:59.836037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms
00:20:41.316 [2024-11-20 18:29:59.836049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.316 [2024-11-20 18:29:59.839401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.316 [2024-11-20 18:29:59.839426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:20:41.316 [2024-11-20 18:29:59.839437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.336 ms
00:20:41.316 [2024-11-20 18:29:59.839443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.316 [2024-11-20 18:29:59.844699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.316 [2024-11-20 18:29:59.844812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:20:41.316 [2024-11-20 18:29:59.844828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.227 ms
00:20:41.316 [2024-11-20 18:29:59.844834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.316 [2024-11-20 18:29:59.852264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.316 [2024-11-20 18:29:59.852354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:20:41.316 [2024-11-20 18:29:59.852408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.383 ms
00:20:41.316 [2024-11-20 18:29:59.852431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.316 [2024-11-20 18:29:59.858891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.316 [2024-11-20 18:29:59.858988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:20:41.316 [2024-11-20 18:29:59.859042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.421 ms
00:20:41.316 [2024-11-20 18:29:59.859061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.316 [2024-11-20 18:29:59.859182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.316 [2024-11-20 18:29:59.859205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:20:41.316 [2024-11-20 18:29:59.859223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms
00:20:41.316 [2024-11-20 18:29:59.859268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.316 [2024-11-20 18:29:59.867160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.316 [2024-11-20 18:29:59.867248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:20:41.316 [2024-11-20 18:29:59.867296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.863 ms
00:20:41.316 [2024-11-20 18:29:59.867313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.316 [2024-11-20 18:29:59.874537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.316 [2024-11-20 18:29:59.874620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:20:41.316 [2024-11-20 18:29:59.874690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.186 ms
00:20:41.316 [2024-11-20 18:29:59.874710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.316 [2024-11-20 18:29:59.881639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.317 [2024-11-20 18:29:59.881724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:20:41.317 [2024-11-20 18:29:59.881767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.889 ms
00:20:41.317 [2024-11-20 18:29:59.881784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.317 [2024-11-20 18:29:59.888853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.317 [2024-11-20 18:29:59.888935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:20:41.317 [2024-11-20 18:29:59.888976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.011 ms
00:20:41.317 [2024-11-20 18:29:59.888992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.317 [2024-11-20 18:29:59.889034] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:20:41.317 [2024-11-20 18:29:59.889187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.889227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.889249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.889273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.889570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.889643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.889668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.889718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.889742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.889783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.889808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.889858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.889883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.889907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.889949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.889974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.890071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.890111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.890135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.890160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.890183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.890209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.890267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.890292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.890315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.890339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.890362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.890418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.890441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.890464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.890487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.890510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.890588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.890611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.890633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
[2024-11-20 18:29:59.890657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.890705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.890732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.890781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.890807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.890829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.890874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.890896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.890920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.890964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.890991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.891014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.891037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.891085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.891118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.891167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.891192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.891231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.891258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.891281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.891329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.891353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.891376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.891418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.891460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.891501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.891527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.891550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.891573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.891622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.891648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.891670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.891693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.891715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.891767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.891809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.891837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.891889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.891915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.891937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.892031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.892054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.892077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.892108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.892134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.892156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.892253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.892278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.892324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:20:41.317 [2024-11-20 18:29:59.892347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:20:41.318 [2024-11-20 18:29:59.892372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:20:41.318 [2024-11-20 18:29:59.892432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:20:41.318 [2024-11-20 18:29:59.892457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:20:41.318 [2024-11-20 18:29:59.892480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:20:41.318 [2024-11-20 18:29:59.892496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:20:41.318 [2024-11-20 18:29:59.892503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:20:41.318 [2024-11-20 18:29:59.892510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:20:41.318 [2024-11-20 18:29:59.892515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:20:41.318 [2024-11-20 18:29:59.892522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:20:41.318 [2024-11-20 18:29:59.892528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:20:41.318 [2024-11-20 18:29:59.892534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:20:41.318 [2024-11-20 18:29:59.892540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:20:41.318 [2024-11-20 18:29:59.892547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:20:41.318 [2024-11-20 18:29:59.892553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:20:41.318 [2024-11-20 18:29:59.892561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:20:41.318 [2024-11-20 18:29:59.892573] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:20:41.318 [2024-11-20 18:29:59.892584] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cbb96cf1-d64b-463e-90b3-2025057ee758
00:20:41.318 [2024-11-20 18:29:59.892594] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:20:41.318 [2024-11-20 18:29:59.892603] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:20:41.318 [2024-11-20 18:29:59.892608] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:20:41.318 [2024-11-20 18:29:59.892615] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:20:41.318 [2024-11-20 18:29:59.892620] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:20:41.318 [2024-11-20 18:29:59.892628] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  crit: 0
00:20:41.318 [2024-11-20 18:29:59.892633] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  high: 0
00:20:41.318 [2024-11-20 18:29:59.892639] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  low: 0
00:20:41.318 [2024-11-20 18:29:59.892644] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
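All 100 bands report 0 / 261120 valid blocks, wr_cnt: 0 and state free, as expected right after a clean shutdown of a freshly trimmed device. A hypothetical way to condense a dump like this when scanning larger logs (build.log again standing in for a saved capture of this output):

  # count bands per state; this run should report "100 free"
  grep -oE 'Band [0-9]+: [0-9]+ / [0-9]+ wr_cnt: [0-9]+ state: [a-z]+' build.log \
    | awk '{print $NF}' | sort | uniq -c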
00:20:41.318 [2024-11-20 18:29:59.892653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.318 [2024-11-20 18:29:59.892659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:20:41.318 [2024-11-20 18:29:59.892667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.620 ms
00:20:41.318 [2024-11-20 18:29:59.892673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.318 [2024-11-20 18:29:59.902571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.318 [2024-11-20 18:29:59.902661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:20:41.318 [2024-11-20 18:29:59.902677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.869 ms
00:20:41.318 [2024-11-20 18:29:59.902683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.318 [2024-11-20 18:29:59.902981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:41.318 [2024-11-20 18:29:59.902990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:20:41.318 [2024-11-20 18:29:59.902998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms
00:20:41.318 [2024-11-20 18:29:59.903005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.318 [2024-11-20 18:29:59.937908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:41.318 [2024-11-20 18:29:59.937999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:20:41.318 [2024-11-20 18:29:59.938013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:41.318 [2024-11-20 18:29:59.938019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.318 [2024-11-20 18:29:59.938112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:41.318 [2024-11-20 18:29:59.938120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:20:41.318 [2024-11-20 18:29:59.938128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:41.318 [2024-11-20 18:29:59.938136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.318 [2024-11-20 18:29:59.938176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:41.318 [2024-11-20 18:29:59.938183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:20:41.318 [2024-11-20 18:29:59.938191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:41.318 [2024-11-20 18:29:59.938197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.318 [2024-11-20 18:29:59.938212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:41.318 [2024-11-20 18:29:59.938219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:20:41.318 [2024-11-20 18:29:59.938226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:41.318 [2024-11-20 18:29:59.938231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.576 [2024-11-20 18:29:59.996911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:41.576 [2024-11-20 18:29:59.997030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:20:41.576 [2024-11-20 18:29:59.997046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:41.576 [2024-11-20 18:29:59.997052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.576 [2024-11-20 18:30:00.047477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:41.576 [2024-11-20 18:30:00.047585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:20:41.576 [2024-11-20 18:30:00.047627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:41.576 [2024-11-20 18:30:00.047648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.576 [2024-11-20 18:30:00.047722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:41.576 [2024-11-20 18:30:00.047768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:20:41.576 [2024-11-20 18:30:00.047815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:41.576 [2024-11-20 18:30:00.047834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.576 [2024-11-20 18:30:00.047869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:41.576 [2024-11-20 18:30:00.047887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:20:41.576 [2024-11-20 18:30:00.047958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:41.576 [2024-11-20 18:30:00.047976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.576 [2024-11-20 18:30:00.048065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:41.576 [2024-11-20 18:30:00.048086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:20:41.576 [2024-11-20 18:30:00.048171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:41.576 [2024-11-20 18:30:00.048190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.576 [2024-11-20 18:30:00.048234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:41.576 [2024-11-20 18:30:00.048299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:20:41.576 [2024-11-20 18:30:00.048319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:41.576 [2024-11-20 18:30:00.048335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.576 [2024-11-20 18:30:00.048378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:41.576 [2024-11-20 18:30:00.048431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:20:41.576 [2024-11-20 18:30:00.048453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:41.576 [2024-11-20 18:30:00.048469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.576 [2024-11-20 18:30:00.048518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:41.576 [2024-11-20 18:30:00.048537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:20:41.576 [2024-11-20 18:30:00.048555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:41.576 [2024-11-20 18:30:00.048571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:41.576 [2024-11-20 18:30:00.048693] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 215.291 ms, result 0
00:20:42.142 18:30:00 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
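trim.sh@105 now reads the device back with spdk_dd: 65536 blocks from the ftl0 input bdev into test/ftl/data, with --json recreating the bdev stack inside the new process. Assuming --count is in the 4 KiB ftl0 blocks used throughout this log, the output file should come out at 256 MiB; a hypothetical post-hoc check:

  expected=$((65536 * 4096))   # 268435456 bytes = 256 MiB
  actual=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/data)
  [ "$actual" -eq "$expected" ] && echo "data file size OK: $actual bytes"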
00:20:42.142 [2024-11-20 18:30:00.624081] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization...
00:20:42.142 [2024-11-20 18:30:00.624223] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76830 ]
00:20:42.401 [2024-11-20 18:30:00.782165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:42.401 [2024-11-20 18:30:00.868200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:42.661 [2024-11-20 18:30:01.076152] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:20:42.661 [2024-11-20 18:30:01.076339] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:20:42.661 [2024-11-20 18:30:01.227798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:42.661 [2024-11-20 18:30:01.227942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:20:42.661 [2024-11-20 18:30:01.227998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:20:42.661 [2024-11-20 18:30:01.228018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:42.661 [2024-11-20 18:30:01.230150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:42.661 [2024-11-20 18:30:01.230253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:20:42.661 [2024-11-20 18:30:01.230303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.104 ms
00:20:42.661 [2024-11-20 18:30:01.230321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:42.661 [2024-11-20 18:30:01.230389] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:20:42.661 [2024-11-20 18:30:01.231328] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:20:42.661 [2024-11-20 18:30:01.231353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:42.661 [2024-11-20 18:30:01.231361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:20:42.661 [2024-11-20 18:30:01.231368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.970 ms
00:20:42.661 [2024-11-20 18:30:01.231375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:42.661 [2024-11-20 18:30:01.232383] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:20:42.661 [2024-11-20 18:30:01.242061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:42.661 [2024-11-20 18:30:01.242107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:20:42.661 [2024-11-20 18:30:01.242117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.680 ms
00:20:42.661 [2024-11-20 18:30:01.242125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:42.661 [2024-11-20 18:30:01.242198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:42.661 [2024-11-20 18:30:01.242206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:20:42.661 [2024-11-20 18:30:01.242213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms
00:20:42.662 [2024-11-20 18:30:01.242219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:42.662 [2024-11-20 18:30:01.246743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:42.662 [2024-11-20 18:30:01.246768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:20:42.662 [2024-11-20 18:30:01.246775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.495 ms
00:20:42.662 [2024-11-20 18:30:01.246781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:42.662 [2024-11-20 18:30:01.246855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:42.662 [2024-11-20 18:30:01.246863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:20:42.662 [2024-11-20 18:30:01.246869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms
00:20:42.662 [2024-11-20 18:30:01.246875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:42.662 [2024-11-20 18:30:01.246891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:42.662 [2024-11-20 18:30:01.246900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:20:42.662 [2024-11-20 18:30:01.246906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:20:42.662 [2024-11-20 18:30:01.246911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:42.662 [2024-11-20 18:30:01.246928] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:20:42.662 [2024-11-20 18:30:01.249706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:42.662 [2024-11-20 18:30:01.249811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:20:42.662 [2024-11-20 18:30:01.249823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.782 ms
00:20:42.662 [2024-11-20 18:30:01.249829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:42.662 [2024-11-20 18:30:01.249858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:42.662 [2024-11-20 18:30:01.249864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:20:42.662 [2024-11-20 18:30:01.249871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:20:42.662 [2024-11-20 18:30:01.249877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:42.662 [2024-11-20 18:30:01.249890] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:20:42.662 [2024-11-20 18:30:01.249906] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:20:42.662 [2024-11-20 18:30:01.249932] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:20:42.662 [2024-11-20 18:30:01.249944] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:20:42.662 [2024-11-20 18:30:01.250024] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:20:42.662 [2024-11-20 18:30:01.250033] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:20:42.662 [2024-11-20 18:30:01.250041] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:20:42.662 [2024-11-20 18:30:01.250048] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:20:42.662 [2024-11-20 18:30:01.250057] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:20:42.662 [2024-11-20 18:30:01.250063] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:20:42.662 [2024-11-20 18:30:01.250068] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:20:42.662 [2024-11-20 18:30:01.250073] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:20:42.662 [2024-11-20 18:30:01.250079] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:20:42.662 [2024-11-20 18:30:01.250084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:42.662 [2024-11-20 18:30:01.250090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:20:42.662 [2024-11-20 18:30:01.250117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms
00:20:42.662 [2024-11-20 18:30:01.250123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:42.662 [2024-11-20 18:30:01.250191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:42.662 [2024-11-20 18:30:01.250198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:20:42.662 [2024-11-20 18:30:01.250206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms
00:20:42.662 [2024-11-20 18:30:01.250212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:42.662 [2024-11-20 18:30:01.250289] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:20:42.662 [2024-11-20 18:30:01.250297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:20:42.662 [2024-11-20 18:30:01.250303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:20:42.662 [2024-11-20 18:30:01.250309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:20:42.662 [2024-11-20 18:30:01.250315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:20:42.662 [2024-11-20 18:30:01.250320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:20:42.662 [2024-11-20 18:30:01.250325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB
00:20:42.662 [2024-11-20 18:30:01.250330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:20:42.662 [2024-11-20 18:30:01.250337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB
00:20:42.662 [2024-11-20 18:30:01.250342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:20:42.662 [2024-11-20 18:30:01.250347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:20:42.662 [2024-11-20 18:30:01.250352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB
00:20:42.662 [2024-11-20 18:30:01.250357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:20:42.662 [2024-11-20 18:30:01.250367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:20:42.662 [2024-11-20 18:30:01.250372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB
00:20:42.662 [2024-11-20 18:30:01.250377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:20:42.662 [2024-11-20 18:30:01.250382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:20:42.662 [2024-11-20 18:30:01.250388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB
00:20:42.662 [2024-11-20 18:30:01.250393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:20:42.662 [2024-11-20 18:30:01.250398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:20:42.662 [2024-11-20 18:30:01.250403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB
00:20:42.662 [2024-11-20 18:30:01.250408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:20:42.662 [2024-11-20 18:30:01.250413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:20:42.662 [2024-11-20 18:30:01.250418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB
00:20:42.662 [2024-11-20 18:30:01.250423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:20:42.662 [2024-11-20 18:30:01.250428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:20:42.662 [2024-11-20 18:30:01.250433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB
00:20:42.662 [2024-11-20 18:30:01.250437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:20:42.662 [2024-11-20 18:30:01.250442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:20:42.662 [2024-11-20 18:30:01.250447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB
00:20:42.662 [2024-11-20 18:30:01.250452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:20:42.662 [2024-11-20 18:30:01.250457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:20:42.662 [2024-11-20 18:30:01.250462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB
00:20:42.662 [2024-11-20 18:30:01.250467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:20:42.662 [2024-11-20 18:30:01.250472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:20:42.662 [2024-11-20 18:30:01.250476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB
00:20:42.662 [2024-11-20 18:30:01.250481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:20:42.662 [2024-11-20 18:30:01.250487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:20:42.662 [2024-11-20 18:30:01.250492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB
00:20:42.662 [2024-11-20 18:30:01.250497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:20:42.662 [2024-11-20 18:30:01.250502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:20:42.662 [2024-11-20 18:30:01.250507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB
00:20:42.662 [2024-11-20 18:30:01.250512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:20:42.662 [2024-11-20 18:30:01.250517] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:20:42.662 [2024-11-20 18:30:01.250524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:20:42.662 [2024-11-20 18:30:01.250529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:20:42.662 [2024-11-20 18:30:01.250536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:20:42.662 [2024-11-20 18:30:01.250542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:20:42.662 [2024-11-20 18:30:01.250547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:20:42.662 [2024-11-20 18:30:01.250552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:20:42.662 [2024-11-20 18:30:01.250557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:20:42.662 [2024-11-20 18:30:01.250562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:20:42.662 [2024-11-20 18:30:01.250567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:20:42.662 [2024-11-20 18:30:01.250574] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:20:42.662 [2024-11-20 18:30:01.250581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:20:42.662 [2024-11-20 18:30:01.250587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:20:42.662 [2024-11-20 18:30:01.250593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:20:42.662 [2024-11-20 18:30:01.250598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:20:42.663 [2024-11-20 18:30:01.250603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:20:42.663 [2024-11-20 18:30:01.250608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:20:42.663 [2024-11-20 18:30:01.250614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:20:42.663 [2024-11-20 18:30:01.250619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:20:42.663 [2024-11-20 18:30:01.250624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:20:42.663 [2024-11-20 18:30:01.250629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:20:42.663 [2024-11-20 18:30:01.250635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:20:42.663 [2024-11-20 18:30:01.250640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:20:42.663 [2024-11-20 18:30:01.250645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:20:42.663 [2024-11-20 18:30:01.250650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:20:42.663 [2024-11-20 18:30:01.250655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:20:42.663 [2024-11-20 18:30:01.250661] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:20:42.663 [2024-11-20 18:30:01.250668] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:20:42.663 [2024-11-20 18:30:01.250674] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:20:42.663 [2024-11-20 18:30:01.250680] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:20:42.663 [2024-11-20 18:30:01.250685] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:20:42.663 [2024-11-20 18:30:01.250690] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:20:42.663 [2024-11-20 18:30:01.250697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:42.663 [2024-11-20 18:30:01.250702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:20:42.663 [2024-11-20 18:30:01.250710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.460 ms
00:20:42.663 [2024-11-20 18:30:01.250715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:42.663 [2024-11-20 18:30:01.271671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:42.663 [2024-11-20 18:30:01.271699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:20:42.663 [2024-11-20 18:30:01.271707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.917 ms
00:20:42.663 [2024-11-20 18:30:01.271713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:42.663 [2024-11-20 18:30:01.271809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:42.663 [2024-11-20 18:30:01.271819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:20:42.663 [2024-11-20 18:30:01.271826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms
00:20:42.663 [2024-11-20 18:30:01.271831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:42.921 [2024-11-20 18:30:01.315935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:42.921 [2024-11-20 18:30:01.315967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:20:42.921 [2024-11-20 18:30:01.315977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.087 ms
00:20:42.921 [2024-11-20 18:30:01.315986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:42.921 [2024-11-20 18:30:01.316045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:42.921 [2024-11-20 18:30:01.316054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:20:42.921 [2024-11-20 18:30:01.316060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:20:42.921 [2024-11-20 18:30:01.316067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:42.921 [2024-11-20 18:30:01.316363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:42.921 [2024-11-20 18:30:01.316375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:20:42.921 [2024-11-20 18:30:01.316383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms
00:20:42.921 [2024-11-20 18:30:01.316389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[FTL][ftl0] Action 00:20:42.921 [2024-11-20 18:30:01.316504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:42.921 [2024-11-20 18:30:01.316510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:20:42.921 [2024-11-20 18:30:01.316516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.921 [2024-11-20 18:30:01.327403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.921 [2024-11-20 18:30:01.327428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:42.921 [2024-11-20 18:30:01.327437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.871 ms 00:20:42.921 [2024-11-20 18:30:01.327442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.921 [2024-11-20 18:30:01.337260] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:42.921 [2024-11-20 18:30:01.337371] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:42.921 [2024-11-20 18:30:01.337383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.921 [2024-11-20 18:30:01.337390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:42.921 [2024-11-20 18:30:01.337397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.851 ms 00:20:42.921 [2024-11-20 18:30:01.337402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.921 [2024-11-20 18:30:01.355686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.921 [2024-11-20 18:30:01.355719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:42.921 [2024-11-20 18:30:01.355727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.237 ms 00:20:42.921 [2024-11-20 18:30:01.355734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.921 [2024-11-20 18:30:01.364450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.921 [2024-11-20 18:30:01.364477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:42.921 [2024-11-20 18:30:01.364484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.661 ms 00:20:42.921 [2024-11-20 18:30:01.364489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.921 [2024-11-20 18:30:01.372839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.921 [2024-11-20 18:30:01.372866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:42.921 [2024-11-20 18:30:01.372873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.308 ms 00:20:42.921 [2024-11-20 18:30:01.372878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.921 [2024-11-20 18:30:01.373357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.921 [2024-11-20 18:30:01.373373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:42.921 [2024-11-20 18:30:01.373380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms 00:20:42.921 [2024-11-20 18:30:01.373386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.921 [2024-11-20 18:30:01.417711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.921 [2024-11-20 18:30:01.417750] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:42.921 [2024-11-20 18:30:01.417760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.307 ms 00:20:42.921 [2024-11-20 18:30:01.417767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.921 [2024-11-20 18:30:01.425469] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:42.921 [2024-11-20 18:30:01.437085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.921 [2024-11-20 18:30:01.437121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:42.922 [2024-11-20 18:30:01.437130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.250 ms 00:20:42.922 [2024-11-20 18:30:01.437136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.922 [2024-11-20 18:30:01.437211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.922 [2024-11-20 18:30:01.437219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:42.922 [2024-11-20 18:30:01.437226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:42.922 [2024-11-20 18:30:01.437231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.922 [2024-11-20 18:30:01.437269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.922 [2024-11-20 18:30:01.437275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:42.922 [2024-11-20 18:30:01.437282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:42.922 [2024-11-20 18:30:01.437288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.922 [2024-11-20 18:30:01.437309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.922 [2024-11-20 18:30:01.437318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:42.922 [2024-11-20 18:30:01.437324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:42.922 [2024-11-20 18:30:01.437330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.922 [2024-11-20 18:30:01.437352] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:42.922 [2024-11-20 18:30:01.437359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.922 [2024-11-20 18:30:01.437364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:42.922 [2024-11-20 18:30:01.437371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:42.922 [2024-11-20 18:30:01.437377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.922 [2024-11-20 18:30:01.455091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.922 [2024-11-20 18:30:01.455126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:42.922 [2024-11-20 18:30:01.455135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.699 ms 00:20:42.922 [2024-11-20 18:30:01.455141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.922 [2024-11-20 18:30:01.455213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.922 [2024-11-20 18:30:01.455221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:42.922 [2024-11-20 18:30:01.455228] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:42.922 [2024-11-20 18:30:01.455234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.922 [2024-11-20 18:30:01.455971] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:42.922 [2024-11-20 18:30:01.458310] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 227.941 ms, result 0 00:20:42.922 [2024-11-20 18:30:01.458785] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:42.922 [2024-11-20 18:30:01.473735] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:44.302  [2024-11-20T18:30:03.874Z] Copying: 32/256 [MB] (32 MBps) [2024-11-20T18:30:04.817Z] Copying: 52/256 [MB] (20 MBps) [2024-11-20T18:30:05.761Z] Copying: 66/256 [MB] (14 MBps) [2024-11-20T18:30:06.704Z] Copying: 84/256 [MB] (17 MBps) [2024-11-20T18:30:07.646Z] Copying: 105/256 [MB] (20 MBps) [2024-11-20T18:30:08.590Z] Copying: 126/256 [MB] (20 MBps) [2024-11-20T18:30:09.533Z] Copying: 148/256 [MB] (22 MBps) [2024-11-20T18:30:10.921Z] Copying: 170/256 [MB] (21 MBps) [2024-11-20T18:30:11.860Z] Copying: 183/256 [MB] (13 MBps) [2024-11-20T18:30:12.804Z] Copying: 202/256 [MB] (19 MBps) [2024-11-20T18:30:13.747Z] Copying: 225/256 [MB] (22 MBps) [2024-11-20T18:30:14.319Z] Copying: 246/256 [MB] (20 MBps) [2024-11-20T18:30:14.892Z] Copying: 256/256 [MB] (average 20 MBps)[2024-11-20 18:30:14.622020] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:56.263 [2024-11-20 18:30:14.632884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.263 [2024-11-20 18:30:14.632940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:56.263 [2024-11-20 18:30:14.632957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:56.263 [2024-11-20 18:30:14.632975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.263 [2024-11-20 18:30:14.633004] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:56.263 [2024-11-20 18:30:14.636045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.263 [2024-11-20 18:30:14.636086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:56.263 [2024-11-20 18:30:14.636112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.024 ms 00:20:56.263 [2024-11-20 18:30:14.636122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.263 [2024-11-20 18:30:14.636421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.263 [2024-11-20 18:30:14.636432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:56.263 [2024-11-20 18:30:14.636442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:20:56.263 [2024-11-20 18:30:14.636450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.263 [2024-11-20 18:30:14.640177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.263 [2024-11-20 18:30:14.640207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:56.263 [2024-11-20 18:30:14.640218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 3.708 ms 00:20:56.263 [2024-11-20 18:30:14.640226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.263 [2024-11-20 18:30:14.647228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.263 [2024-11-20 18:30:14.647441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:56.263 [2024-11-20 18:30:14.647464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.982 ms 00:20:56.263 [2024-11-20 18:30:14.647473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.263 [2024-11-20 18:30:14.673976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.263 [2024-11-20 18:30:14.674025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:56.263 [2024-11-20 18:30:14.674038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.424 ms 00:20:56.263 [2024-11-20 18:30:14.674047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.263 [2024-11-20 18:30:14.689949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.263 [2024-11-20 18:30:14.690194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:56.263 [2024-11-20 18:30:14.690219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.800 ms 00:20:56.263 [2024-11-20 18:30:14.690235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.263 [2024-11-20 18:30:14.690394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.263 [2024-11-20 18:30:14.690406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:56.263 [2024-11-20 18:30:14.690416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:20:56.263 [2024-11-20 18:30:14.690423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.263 [2024-11-20 18:30:14.717081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.263 [2024-11-20 18:30:14.717150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:56.263 [2024-11-20 18:30:14.717163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.629 ms 00:20:56.263 [2024-11-20 18:30:14.717171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.263 [2024-11-20 18:30:14.743689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.263 [2024-11-20 18:30:14.743738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:56.263 [2024-11-20 18:30:14.743751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.450 ms 00:20:56.263 [2024-11-20 18:30:14.743758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.263 [2024-11-20 18:30:14.769221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.263 [2024-11-20 18:30:14.769268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:56.263 [2024-11-20 18:30:14.769281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.393 ms 00:20:56.263 [2024-11-20 18:30:14.769289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.263 [2024-11-20 18:30:14.794381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.263 [2024-11-20 18:30:14.794431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:56.263 [2024-11-20 
18:30:14.794444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.987 ms 00:20:56.263 [2024-11-20 18:30:14.794451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.263 [2024-11-20 18:30:14.794506] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:56.263 [2024-11-20 18:30:14.794522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:56.263 [2024-11-20 18:30:14.794532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:56.263 [2024-11-20 18:30:14.794541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:56.263 [2024-11-20 18:30:14.794549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:56.263 [2024-11-20 18:30:14.794557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:56.263 [2024-11-20 18:30:14.794564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:56.263 [2024-11-20 18:30:14.794572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:56.263 [2024-11-20 18:30:14.794580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:56.263 [2024-11-20 18:30:14.794587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:56.263 [2024-11-20 18:30:14.794595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:56.263 [2024-11-20 18:30:14.794602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:56.263 [2024-11-20 18:30:14.794609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:56.263 [2024-11-20 18:30:14.794617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:56.263 [2024-11-20 18:30:14.794624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:56.263 [2024-11-20 18:30:14.794632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:56.263 [2024-11-20 18:30:14.794639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:56.263 [2024-11-20 18:30:14.794646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:56.263 [2024-11-20 18:30:14.794653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:56.263 [2024-11-20 18:30:14.794661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:56.263 [2024-11-20 18:30:14.794668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:56.263 [2024-11-20 18:30:14.794676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:56.263 [2024-11-20 18:30:14.794683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:56.263 [2024-11-20 18:30:14.794691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.794698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.794705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.794713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.794723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.794730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.794738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.794747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.794755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.794763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.794771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.794779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.794787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.794794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.794802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.794810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.794817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.794825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.794833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.794840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.794849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.794857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.794864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.794873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.794886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.794894] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.794902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.794909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.794917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.794924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.794931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.794939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.794946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.794955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.794963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.794970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.794978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.794985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.794992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.795001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.795010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.795018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.795026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.795033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.795042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.795050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.795058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.795066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.795074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.795081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.795089] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.795123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.795131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.795140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.795148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.795156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.795164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.795172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.795180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.795188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.795196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.795205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.795214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.795222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.795230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.795238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.795245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.795253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.795284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.795292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.795299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.795309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.795317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.795336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.795344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 
18:30:14.795352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.795359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.795367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:56.264 [2024-11-20 18:30:14.795384] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:56.264 [2024-11-20 18:30:14.795394] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cbb96cf1-d64b-463e-90b3-2025057ee758 00:20:56.264 [2024-11-20 18:30:14.795402] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:56.264 [2024-11-20 18:30:14.795411] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:56.264 [2024-11-20 18:30:14.795419] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:56.264 [2024-11-20 18:30:14.795428] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:56.264 [2024-11-20 18:30:14.795436] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:56.264 [2024-11-20 18:30:14.795444] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:56.264 [2024-11-20 18:30:14.795452] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:56.264 [2024-11-20 18:30:14.795470] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:56.264 [2024-11-20 18:30:14.795476] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:56.264 [2024-11-20 18:30:14.795483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.264 [2024-11-20 18:30:14.795495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:56.265 [2024-11-20 18:30:14.795504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.978 ms 00:20:56.265 [2024-11-20 18:30:14.795512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.265 [2024-11-20 18:30:14.809785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.265 [2024-11-20 18:30:14.809829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:56.265 [2024-11-20 18:30:14.809842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.235 ms 00:20:56.265 [2024-11-20 18:30:14.809850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.265 [2024-11-20 18:30:14.810322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.265 [2024-11-20 18:30:14.810355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:56.265 [2024-11-20 18:30:14.810365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.428 ms 00:20:56.265 [2024-11-20 18:30:14.810374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.265 [2024-11-20 18:30:14.853066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.265 [2024-11-20 18:30:14.853145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:56.265 [2024-11-20 18:30:14.853157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.265 [2024-11-20 18:30:14.853166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.265 [2024-11-20 18:30:14.853290] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:20:56.265 [2024-11-20 18:30:14.853302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:56.265 [2024-11-20 18:30:14.853311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.265 [2024-11-20 18:30:14.853319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.265 [2024-11-20 18:30:14.853384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.265 [2024-11-20 18:30:14.853394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:56.265 [2024-11-20 18:30:14.853403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.265 [2024-11-20 18:30:14.853411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.265 [2024-11-20 18:30:14.853430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.265 [2024-11-20 18:30:14.853443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:56.265 [2024-11-20 18:30:14.853452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.265 [2024-11-20 18:30:14.853460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.526 [2024-11-20 18:30:14.943852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.526 [2024-11-20 18:30:14.943924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:56.526 [2024-11-20 18:30:14.943939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.526 [2024-11-20 18:30:14.943948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.526 [2024-11-20 18:30:15.014069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.526 [2024-11-20 18:30:15.014147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:56.526 [2024-11-20 18:30:15.014160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.526 [2024-11-20 18:30:15.014169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.526 [2024-11-20 18:30:15.014255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.526 [2024-11-20 18:30:15.014266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:56.526 [2024-11-20 18:30:15.014295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.526 [2024-11-20 18:30:15.014304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.526 [2024-11-20 18:30:15.014337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.526 [2024-11-20 18:30:15.014347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:56.526 [2024-11-20 18:30:15.014361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.526 [2024-11-20 18:30:15.014369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.526 [2024-11-20 18:30:15.014510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.526 [2024-11-20 18:30:15.014522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:56.526 [2024-11-20 18:30:15.014531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.526 [2024-11-20 18:30:15.014539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:20:56.526 [2024-11-20 18:30:15.014575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.526 [2024-11-20 18:30:15.014584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:56.526 [2024-11-20 18:30:15.014593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.526 [2024-11-20 18:30:15.014604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.526 [2024-11-20 18:30:15.014651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.526 [2024-11-20 18:30:15.014661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:56.526 [2024-11-20 18:30:15.014671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.526 [2024-11-20 18:30:15.014680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.526 [2024-11-20 18:30:15.014730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.526 [2024-11-20 18:30:15.014741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:56.526 [2024-11-20 18:30:15.014753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.526 [2024-11-20 18:30:15.014762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.526 [2024-11-20 18:30:15.014923] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 382.036 ms, result 0 00:20:57.470 00:20:57.470 00:20:57.470 18:30:15 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:57.731 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:20:57.731 18:30:16 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:20:57.731 18:30:16 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:20:57.731 18:30:16 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:57.731 18:30:16 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:57.731 18:30:16 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:20:57.991 18:30:16 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:57.991 18:30:16 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 76783 00:20:57.991 18:30:16 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76783 ']' 00:20:57.991 Process with pid 76783 is not found 00:20:57.991 18:30:16 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76783 00:20:57.991 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76783) - No such process 00:20:57.991 18:30:16 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 76783 is not found' 00:20:57.991 ************************************ 00:20:57.991 END TEST ftl_trim 00:20:57.991 ************************************ 00:20:57.991 00:20:57.991 real 1m7.443s 00:20:57.991 user 1m23.611s 00:20:57.991 sys 0m15.163s 00:20:57.991 18:30:16 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:57.991 18:30:16 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:57.991 18:30:16 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:57.991 18:30:16 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:57.991 18:30:16 ftl -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:20:57.991 18:30:16 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:57.991 ************************************ 00:20:57.991 START TEST ftl_restore 00:20:57.991 ************************************ 00:20:57.991 18:30:16 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:57.991 * Looking for test storage... 00:20:57.991 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:57.991 18:30:16 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:57.991 18:30:16 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:20:57.991 18:30:16 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:58.252 18:30:16 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:58.252 18:30:16 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:58.252 18:30:16 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:58.252 18:30:16 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:58.252 18:30:16 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:20:58.252 18:30:16 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:20:58.252 18:30:16 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:20:58.252 18:30:16 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:20:58.252 18:30:16 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:20:58.252 18:30:16 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:20:58.252 18:30:16 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:20:58.252 18:30:16 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:58.252 18:30:16 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:20:58.252 18:30:16 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:20:58.252 18:30:16 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:58.252 18:30:16 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:58.252 18:30:16 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:20:58.252 18:30:16 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:20:58.252 18:30:16 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:58.252 18:30:16 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:20:58.252 18:30:16 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:20:58.252 18:30:16 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:20:58.252 18:30:16 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:20:58.252 18:30:16 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:58.252 18:30:16 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:20:58.252 18:30:16 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:20:58.252 18:30:16 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:58.252 18:30:16 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:58.252 18:30:16 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:20:58.252 18:30:16 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:58.252 18:30:16 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:58.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.252 --rc genhtml_branch_coverage=1 00:20:58.252 --rc genhtml_function_coverage=1 00:20:58.252 --rc genhtml_legend=1 00:20:58.252 --rc geninfo_all_blocks=1 00:20:58.252 --rc geninfo_unexecuted_blocks=1 00:20:58.252 00:20:58.252 ' 00:20:58.252 18:30:16 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:58.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.252 --rc genhtml_branch_coverage=1 00:20:58.252 --rc genhtml_function_coverage=1 00:20:58.252 --rc genhtml_legend=1 00:20:58.252 --rc geninfo_all_blocks=1 00:20:58.252 --rc geninfo_unexecuted_blocks=1 00:20:58.252 00:20:58.252 ' 00:20:58.252 18:30:16 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:58.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.252 --rc genhtml_branch_coverage=1 00:20:58.252 --rc genhtml_function_coverage=1 00:20:58.252 --rc genhtml_legend=1 00:20:58.252 --rc geninfo_all_blocks=1 00:20:58.252 --rc geninfo_unexecuted_blocks=1 00:20:58.252 00:20:58.252 ' 00:20:58.252 18:30:16 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:58.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.252 --rc genhtml_branch_coverage=1 00:20:58.252 --rc genhtml_function_coverage=1 00:20:58.252 --rc genhtml_legend=1 00:20:58.252 --rc geninfo_all_blocks=1 00:20:58.252 --rc geninfo_unexecuted_blocks=1 00:20:58.252 00:20:58.252 ' 00:20:58.252 18:30:16 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:58.252 18:30:16 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:20:58.252 18:30:16 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:58.252 18:30:16 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:58.252 18:30:16 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:20:58.252 18:30:16 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:58.252 18:30:16 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:58.252 18:30:16 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:58.252 18:30:16 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:58.252 18:30:16 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:58.252 18:30:16 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:58.252 18:30:16 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:58.252 18:30:16 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:58.252 18:30:16 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:58.252 18:30:16 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:58.252 18:30:16 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:58.252 18:30:16 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:58.252 18:30:16 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:58.252 18:30:16 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:58.252 18:30:16 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:58.252 18:30:16 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:58.252 18:30:16 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:58.252 18:30:16 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:58.252 18:30:16 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:58.252 18:30:16 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:58.253 18:30:16 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:58.253 18:30:16 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:58.253 18:30:16 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:58.253 18:30:16 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:58.253 18:30:16 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:58.253 18:30:16 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:20:58.253 18:30:16 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.SW7ugTO1Mr 00:20:58.253 18:30:16 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:58.253 18:30:16 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:20:58.253 18:30:16 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:20:58.253 18:30:16 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:58.253 18:30:16 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:20:58.253 18:30:16 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:20:58.253 18:30:16 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:20:58.253 18:30:16 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:58.253 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:58.253 18:30:16 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=77063 00:20:58.253 18:30:16 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 77063 00:20:58.253 18:30:16 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 77063 ']' 00:20:58.253 18:30:16 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.253 18:30:16 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:58.253 18:30:16 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.253 18:30:16 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:58.253 18:30:16 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:20:58.253 18:30:16 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:58.253 [2024-11-20 18:30:16.762228] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:20:58.253 [2024-11-20 18:30:16.762378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77063 ] 00:20:58.514 [2024-11-20 18:30:16.928160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.514 [2024-11-20 18:30:17.049582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.455 18:30:17 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:59.455 18:30:17 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:20:59.455 18:30:17 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:59.455 18:30:17 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:20:59.455 18:30:17 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:59.455 18:30:17 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:20:59.455 18:30:17 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:20:59.455 18:30:17 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:59.455 18:30:18 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:59.455 18:30:18 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:20:59.455 18:30:18 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:59.455 18:30:18 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:59.455 18:30:18 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:59.455 18:30:18 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:59.455 18:30:18 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:59.455 18:30:18 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:59.715 18:30:18 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:59.715 { 00:20:59.715 "name": "nvme0n1", 00:20:59.715 "aliases": [ 00:20:59.715 "9aa08c4c-429d-48e1-abeb-66755e5e4dd8" 00:20:59.715 ], 00:20:59.715 "product_name": "NVMe disk", 00:20:59.715 "block_size": 4096, 00:20:59.715 "num_blocks": 1310720, 00:20:59.715 "uuid": 
"9aa08c4c-429d-48e1-abeb-66755e5e4dd8", 00:20:59.715 "numa_id": -1, 00:20:59.715 "assigned_rate_limits": { 00:20:59.715 "rw_ios_per_sec": 0, 00:20:59.715 "rw_mbytes_per_sec": 0, 00:20:59.715 "r_mbytes_per_sec": 0, 00:20:59.715 "w_mbytes_per_sec": 0 00:20:59.715 }, 00:20:59.715 "claimed": true, 00:20:59.715 "claim_type": "read_many_write_one", 00:20:59.715 "zoned": false, 00:20:59.715 "supported_io_types": { 00:20:59.715 "read": true, 00:20:59.715 "write": true, 00:20:59.715 "unmap": true, 00:20:59.715 "flush": true, 00:20:59.715 "reset": true, 00:20:59.715 "nvme_admin": true, 00:20:59.715 "nvme_io": true, 00:20:59.715 "nvme_io_md": false, 00:20:59.715 "write_zeroes": true, 00:20:59.715 "zcopy": false, 00:20:59.715 "get_zone_info": false, 00:20:59.715 "zone_management": false, 00:20:59.715 "zone_append": false, 00:20:59.715 "compare": true, 00:20:59.715 "compare_and_write": false, 00:20:59.715 "abort": true, 00:20:59.715 "seek_hole": false, 00:20:59.715 "seek_data": false, 00:20:59.715 "copy": true, 00:20:59.715 "nvme_iov_md": false 00:20:59.715 }, 00:20:59.715 "driver_specific": { 00:20:59.715 "nvme": [ 00:20:59.715 { 00:20:59.715 "pci_address": "0000:00:11.0", 00:20:59.715 "trid": { 00:20:59.715 "trtype": "PCIe", 00:20:59.715 "traddr": "0000:00:11.0" 00:20:59.715 }, 00:20:59.715 "ctrlr_data": { 00:20:59.715 "cntlid": 0, 00:20:59.715 "vendor_id": "0x1b36", 00:20:59.715 "model_number": "QEMU NVMe Ctrl", 00:20:59.715 "serial_number": "12341", 00:20:59.715 "firmware_revision": "8.0.0", 00:20:59.715 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:59.715 "oacs": { 00:20:59.715 "security": 0, 00:20:59.715 "format": 1, 00:20:59.715 "firmware": 0, 00:20:59.715 "ns_manage": 1 00:20:59.715 }, 00:20:59.715 "multi_ctrlr": false, 00:20:59.715 "ana_reporting": false 00:20:59.715 }, 00:20:59.715 "vs": { 00:20:59.715 "nvme_version": "1.4" 00:20:59.715 }, 00:20:59.715 "ns_data": { 00:20:59.715 "id": 1, 00:20:59.715 "can_share": false 00:20:59.715 } 00:20:59.715 } 00:20:59.715 ], 00:20:59.715 "mp_policy": "active_passive" 00:20:59.715 } 00:20:59.715 } 00:20:59.715 ]' 00:20:59.715 18:30:18 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:59.715 18:30:18 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:59.715 18:30:18 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:59.715 18:30:18 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:59.715 18:30:18 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:59.715 18:30:18 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:20:59.715 18:30:18 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:20:59.716 18:30:18 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:59.716 18:30:18 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:20:59.716 18:30:18 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:59.716 18:30:18 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:59.975 18:30:18 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=57f3bc0d-8824-4a36-b8c4-a17ca72e81ae 00:20:59.975 18:30:18 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:20:59.975 18:30:18 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 57f3bc0d-8824-4a36-b8c4-a17ca72e81ae 00:21:00.237 18:30:18 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:21:00.497 18:30:18 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=94c8dd5f-1fe7-45c0-823e-03198908fa29 00:21:00.497 18:30:18 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 94c8dd5f-1fe7-45c0-823e-03198908fa29 00:21:00.757 18:30:19 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=60a92b63-7bff-4916-a4fd-f0f5f7f6d660 00:21:00.757 18:30:19 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:21:00.757 18:30:19 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 60a92b63-7bff-4916-a4fd-f0f5f7f6d660 00:21:00.757 18:30:19 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:21:00.757 18:30:19 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:00.757 18:30:19 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=60a92b63-7bff-4916-a4fd-f0f5f7f6d660 00:21:00.757 18:30:19 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:21:00.757 18:30:19 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 60a92b63-7bff-4916-a4fd-f0f5f7f6d660 00:21:00.757 18:30:19 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=60a92b63-7bff-4916-a4fd-f0f5f7f6d660 00:21:00.757 18:30:19 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:00.757 18:30:19 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:00.757 18:30:19 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:00.757 18:30:19 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 60a92b63-7bff-4916-a4fd-f0f5f7f6d660 00:21:01.017 18:30:19 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:01.017 { 00:21:01.017 "name": "60a92b63-7bff-4916-a4fd-f0f5f7f6d660", 00:21:01.017 "aliases": [ 00:21:01.017 "lvs/nvme0n1p0" 00:21:01.017 ], 00:21:01.017 "product_name": "Logical Volume", 00:21:01.017 "block_size": 4096, 00:21:01.017 "num_blocks": 26476544, 00:21:01.017 "uuid": "60a92b63-7bff-4916-a4fd-f0f5f7f6d660", 00:21:01.017 "assigned_rate_limits": { 00:21:01.017 "rw_ios_per_sec": 0, 00:21:01.017 "rw_mbytes_per_sec": 0, 00:21:01.017 "r_mbytes_per_sec": 0, 00:21:01.017 "w_mbytes_per_sec": 0 00:21:01.017 }, 00:21:01.017 "claimed": false, 00:21:01.017 "zoned": false, 00:21:01.017 "supported_io_types": { 00:21:01.017 "read": true, 00:21:01.017 "write": true, 00:21:01.017 "unmap": true, 00:21:01.017 "flush": false, 00:21:01.017 "reset": true, 00:21:01.017 "nvme_admin": false, 00:21:01.017 "nvme_io": false, 00:21:01.017 "nvme_io_md": false, 00:21:01.017 "write_zeroes": true, 00:21:01.017 "zcopy": false, 00:21:01.017 "get_zone_info": false, 00:21:01.017 "zone_management": false, 00:21:01.018 "zone_append": false, 00:21:01.018 "compare": false, 00:21:01.018 "compare_and_write": false, 00:21:01.018 "abort": false, 00:21:01.018 "seek_hole": true, 00:21:01.018 "seek_data": true, 00:21:01.018 "copy": false, 00:21:01.018 "nvme_iov_md": false 00:21:01.018 }, 00:21:01.018 "driver_specific": { 00:21:01.018 "lvol": { 00:21:01.018 "lvol_store_uuid": "94c8dd5f-1fe7-45c0-823e-03198908fa29", 00:21:01.018 "base_bdev": "nvme0n1", 00:21:01.018 "thin_provision": true, 00:21:01.018 "num_allocated_clusters": 0, 00:21:01.018 "snapshot": false, 00:21:01.018 "clone": false, 00:21:01.018 "esnap_clone": false 00:21:01.018 } 00:21:01.018 } 00:21:01.018 } 00:21:01.018 ]' 00:21:01.018 18:30:19 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:01.018 18:30:19 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:01.018 18:30:19 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:01.018 18:30:19 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:01.018 18:30:19 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:01.018 18:30:19 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:01.018 18:30:19 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:21:01.018 18:30:19 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:21:01.018 18:30:19 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:01.279 18:30:19 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:01.279 18:30:19 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:01.279 18:30:19 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 60a92b63-7bff-4916-a4fd-f0f5f7f6d660 00:21:01.279 18:30:19 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=60a92b63-7bff-4916-a4fd-f0f5f7f6d660 00:21:01.279 18:30:19 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:01.279 18:30:19 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:01.279 18:30:19 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:01.279 18:30:19 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 60a92b63-7bff-4916-a4fd-f0f5f7f6d660 00:21:01.537 18:30:19 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:01.537 { 00:21:01.537 "name": "60a92b63-7bff-4916-a4fd-f0f5f7f6d660", 00:21:01.537 "aliases": [ 00:21:01.537 "lvs/nvme0n1p0" 00:21:01.537 ], 00:21:01.537 "product_name": "Logical Volume", 00:21:01.537 "block_size": 4096, 00:21:01.537 "num_blocks": 26476544, 00:21:01.537 "uuid": "60a92b63-7bff-4916-a4fd-f0f5f7f6d660", 00:21:01.537 "assigned_rate_limits": { 00:21:01.537 "rw_ios_per_sec": 0, 00:21:01.537 "rw_mbytes_per_sec": 0, 00:21:01.537 "r_mbytes_per_sec": 0, 00:21:01.537 "w_mbytes_per_sec": 0 00:21:01.537 }, 00:21:01.537 "claimed": false, 00:21:01.537 "zoned": false, 00:21:01.537 "supported_io_types": { 00:21:01.537 "read": true, 00:21:01.537 "write": true, 00:21:01.537 "unmap": true, 00:21:01.537 "flush": false, 00:21:01.537 "reset": true, 00:21:01.537 "nvme_admin": false, 00:21:01.537 "nvme_io": false, 00:21:01.537 "nvme_io_md": false, 00:21:01.537 "write_zeroes": true, 00:21:01.537 "zcopy": false, 00:21:01.537 "get_zone_info": false, 00:21:01.537 "zone_management": false, 00:21:01.537 "zone_append": false, 00:21:01.537 "compare": false, 00:21:01.537 "compare_and_write": false, 00:21:01.537 "abort": false, 00:21:01.537 "seek_hole": true, 00:21:01.537 "seek_data": true, 00:21:01.537 "copy": false, 00:21:01.537 "nvme_iov_md": false 00:21:01.537 }, 00:21:01.537 "driver_specific": { 00:21:01.538 "lvol": { 00:21:01.538 "lvol_store_uuid": "94c8dd5f-1fe7-45c0-823e-03198908fa29", 00:21:01.538 "base_bdev": "nvme0n1", 00:21:01.538 "thin_provision": true, 00:21:01.538 "num_allocated_clusters": 0, 00:21:01.538 "snapshot": false, 00:21:01.538 "clone": false, 00:21:01.538 "esnap_clone": false 00:21:01.538 } 00:21:01.538 } 00:21:01.538 } 00:21:01.538 ]' 00:21:01.538 18:30:19 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
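(Note: the get_bdev_size helper traced repeatedly above, autotest_common.sh@1382-1392, derives a bdev's size in MiB from the JSON returned by bdev_get_bdevs. A minimal sketch of that logic, reconstructed from the xtrace in this run; the exact helper source may differ:)

    # reconstructed from the traced commands above; values match this run's lvol dump
    get_bdev_size() {
        local bdev_name=$1
        local bdev_info bs nb
        bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$bdev_name")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")    # 4096 in this run
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")    # 26476544 in this run
        echo $((bs * nb / 1024 / 1024))                # 4096 * 26476544 / 2^20 = 103424 MiB
    }

(The same computation earlier yielded 5120 MiB for the raw QEMU namespace nvme0n1: 4096-byte blocks x 1310720 blocks.)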
00:21:01.538 18:30:20 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:01.538 18:30:20 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:01.538 18:30:20 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:01.538 18:30:20 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:01.538 18:30:20 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:01.538 18:30:20 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:21:01.538 18:30:20 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:01.795 18:30:20 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:21:01.795 18:30:20 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 60a92b63-7bff-4916-a4fd-f0f5f7f6d660 00:21:01.795 18:30:20 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=60a92b63-7bff-4916-a4fd-f0f5f7f6d660 00:21:01.795 18:30:20 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:01.795 18:30:20 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:01.795 18:30:20 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:01.796 18:30:20 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 60a92b63-7bff-4916-a4fd-f0f5f7f6d660 00:21:02.053 18:30:20 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:02.053 { 00:21:02.053 "name": "60a92b63-7bff-4916-a4fd-f0f5f7f6d660", 00:21:02.053 "aliases": [ 00:21:02.053 "lvs/nvme0n1p0" 00:21:02.053 ], 00:21:02.053 "product_name": "Logical Volume", 00:21:02.053 "block_size": 4096, 00:21:02.053 "num_blocks": 26476544, 00:21:02.053 "uuid": "60a92b63-7bff-4916-a4fd-f0f5f7f6d660", 00:21:02.053 "assigned_rate_limits": { 00:21:02.053 "rw_ios_per_sec": 0, 00:21:02.053 "rw_mbytes_per_sec": 0, 00:21:02.053 "r_mbytes_per_sec": 0, 00:21:02.053 "w_mbytes_per_sec": 0 00:21:02.053 }, 00:21:02.053 "claimed": false, 00:21:02.053 "zoned": false, 00:21:02.053 "supported_io_types": { 00:21:02.053 "read": true, 00:21:02.053 "write": true, 00:21:02.053 "unmap": true, 00:21:02.053 "flush": false, 00:21:02.053 "reset": true, 00:21:02.053 "nvme_admin": false, 00:21:02.053 "nvme_io": false, 00:21:02.053 "nvme_io_md": false, 00:21:02.053 "write_zeroes": true, 00:21:02.053 "zcopy": false, 00:21:02.053 "get_zone_info": false, 00:21:02.053 "zone_management": false, 00:21:02.053 "zone_append": false, 00:21:02.053 "compare": false, 00:21:02.053 "compare_and_write": false, 00:21:02.053 "abort": false, 00:21:02.053 "seek_hole": true, 00:21:02.053 "seek_data": true, 00:21:02.053 "copy": false, 00:21:02.053 "nvme_iov_md": false 00:21:02.053 }, 00:21:02.053 "driver_specific": { 00:21:02.053 "lvol": { 00:21:02.053 "lvol_store_uuid": "94c8dd5f-1fe7-45c0-823e-03198908fa29", 00:21:02.053 "base_bdev": "nvme0n1", 00:21:02.053 "thin_provision": true, 00:21:02.053 "num_allocated_clusters": 0, 00:21:02.053 "snapshot": false, 00:21:02.053 "clone": false, 00:21:02.053 "esnap_clone": false 00:21:02.053 } 00:21:02.053 } 00:21:02.053 } 00:21:02.053 ]' 00:21:02.053 18:30:20 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:02.053 18:30:20 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:02.053 18:30:20 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:02.053 18:30:20 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:21:02.053 18:30:20 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:02.053 18:30:20 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:02.053 18:30:20 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:21:02.054 18:30:20 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 60a92b63-7bff-4916-a4fd-f0f5f7f6d660 --l2p_dram_limit 10' 00:21:02.054 18:30:20 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:21:02.054 18:30:20 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:21:02.054 18:30:20 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:21:02.054 18:30:20 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:21:02.054 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:21:02.054 18:30:20 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 60a92b63-7bff-4916-a4fd-f0f5f7f6d660 --l2p_dram_limit 10 -c nvc0n1p0 00:21:02.314 [2024-11-20 18:30:20.697330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.314 [2024-11-20 18:30:20.697370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:02.314 [2024-11-20 18:30:20.697383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:02.314 [2024-11-20 18:30:20.697390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.314 [2024-11-20 18:30:20.697434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.314 [2024-11-20 18:30:20.697441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:02.314 [2024-11-20 18:30:20.697449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:21:02.314 [2024-11-20 18:30:20.697455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.314 [2024-11-20 18:30:20.697474] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:02.314 [2024-11-20 18:30:20.698773] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:02.314 [2024-11-20 18:30:20.698804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.314 [2024-11-20 18:30:20.698812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:02.314 [2024-11-20 18:30:20.698822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.334 ms 00:21:02.314 [2024-11-20 18:30:20.698828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.314 [2024-11-20 18:30:20.698890] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d1c29b40-f180-4470-a535-460b563a5775 00:21:02.314 [2024-11-20 18:30:20.699855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.314 [2024-11-20 18:30:20.699886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:02.314 [2024-11-20 18:30:20.699894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:21:02.314 [2024-11-20 18:30:20.699903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.314 [2024-11-20 18:30:20.704667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.314 [2024-11-20 
18:30:20.704697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:02.314 [2024-11-20 18:30:20.704707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.733 ms 00:21:02.314 [2024-11-20 18:30:20.704714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.314 [2024-11-20 18:30:20.704782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.314 [2024-11-20 18:30:20.704791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:02.314 [2024-11-20 18:30:20.704797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:02.314 [2024-11-20 18:30:20.704806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.314 [2024-11-20 18:30:20.704859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.314 [2024-11-20 18:30:20.704869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:02.314 [2024-11-20 18:30:20.704875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:02.314 [2024-11-20 18:30:20.704884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.314 [2024-11-20 18:30:20.704900] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:02.314 [2024-11-20 18:30:20.707766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.314 [2024-11-20 18:30:20.707794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:02.314 [2024-11-20 18:30:20.707804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.869 ms 00:21:02.314 [2024-11-20 18:30:20.707810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.314 [2024-11-20 18:30:20.707835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.314 [2024-11-20 18:30:20.707841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:02.314 [2024-11-20 18:30:20.707849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:02.314 [2024-11-20 18:30:20.707854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.314 [2024-11-20 18:30:20.707868] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:02.315 [2024-11-20 18:30:20.707972] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:02.315 [2024-11-20 18:30:20.707984] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:02.315 [2024-11-20 18:30:20.707992] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:02.315 [2024-11-20 18:30:20.708002] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:02.315 [2024-11-20 18:30:20.708008] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:02.315 [2024-11-20 18:30:20.708016] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:02.315 [2024-11-20 18:30:20.708022] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:02.315 [2024-11-20 18:30:20.708030] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:02.315 [2024-11-20 18:30:20.708035] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:02.315 [2024-11-20 18:30:20.708042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.315 [2024-11-20 18:30:20.708048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:02.315 [2024-11-20 18:30:20.708055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:21:02.315 [2024-11-20 18:30:20.708065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.315 [2024-11-20 18:30:20.708146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.315 [2024-11-20 18:30:20.708153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:02.315 [2024-11-20 18:30:20.708160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:21:02.315 [2024-11-20 18:30:20.708165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.315 [2024-11-20 18:30:20.708245] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:02.315 [2024-11-20 18:30:20.708253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:02.315 [2024-11-20 18:30:20.708261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:02.315 [2024-11-20 18:30:20.708267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.315 [2024-11-20 18:30:20.708274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:02.315 [2024-11-20 18:30:20.708279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:02.315 [2024-11-20 18:30:20.708286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:02.315 [2024-11-20 18:30:20.708291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:02.315 [2024-11-20 18:30:20.708298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:02.315 [2024-11-20 18:30:20.708303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:02.315 [2024-11-20 18:30:20.708309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:02.315 [2024-11-20 18:30:20.708314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:02.315 [2024-11-20 18:30:20.708321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:02.315 [2024-11-20 18:30:20.708326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:02.315 [2024-11-20 18:30:20.708332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:02.315 [2024-11-20 18:30:20.708336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.315 [2024-11-20 18:30:20.708344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:02.315 [2024-11-20 18:30:20.708351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:02.315 [2024-11-20 18:30:20.708359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.315 [2024-11-20 18:30:20.708364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:02.315 [2024-11-20 18:30:20.708370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:02.315 [2024-11-20 18:30:20.708375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:02.315 [2024-11-20 18:30:20.708382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:02.315 
[2024-11-20 18:30:20.708386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:02.315 [2024-11-20 18:30:20.708392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:02.315 [2024-11-20 18:30:20.708397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:02.315 [2024-11-20 18:30:20.708403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:02.315 [2024-11-20 18:30:20.708408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:02.315 [2024-11-20 18:30:20.708414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:02.315 [2024-11-20 18:30:20.708419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:02.315 [2024-11-20 18:30:20.708425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:02.315 [2024-11-20 18:30:20.708430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:02.315 [2024-11-20 18:30:20.708439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:02.315 [2024-11-20 18:30:20.708444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:02.315 [2024-11-20 18:30:20.708450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:02.315 [2024-11-20 18:30:20.708455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:02.315 [2024-11-20 18:30:20.708461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:02.315 [2024-11-20 18:30:20.708466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:02.315 [2024-11-20 18:30:20.708472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:02.315 [2024-11-20 18:30:20.708477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.315 [2024-11-20 18:30:20.708483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:02.315 [2024-11-20 18:30:20.708488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:02.315 [2024-11-20 18:30:20.708493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.315 [2024-11-20 18:30:20.708498] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:02.315 [2024-11-20 18:30:20.708505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:02.315 [2024-11-20 18:30:20.708510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:02.315 [2024-11-20 18:30:20.708518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.315 [2024-11-20 18:30:20.708523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:02.315 [2024-11-20 18:30:20.708531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:02.315 [2024-11-20 18:30:20.708537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:02.315 [2024-11-20 18:30:20.708544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:02.315 [2024-11-20 18:30:20.708549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:02.315 [2024-11-20 18:30:20.708555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:02.315 [2024-11-20 18:30:20.708562] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:02.315 [2024-11-20 
18:30:20.708571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:02.315 [2024-11-20 18:30:20.708578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:02.315 [2024-11-20 18:30:20.708585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:02.315 [2024-11-20 18:30:20.708590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:02.315 [2024-11-20 18:30:20.708597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:02.315 [2024-11-20 18:30:20.708603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:02.315 [2024-11-20 18:30:20.708609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:02.315 [2024-11-20 18:30:20.708614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:02.315 [2024-11-20 18:30:20.708621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:02.315 [2024-11-20 18:30:20.708626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:02.315 [2024-11-20 18:30:20.708634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:02.315 [2024-11-20 18:30:20.708640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:02.315 [2024-11-20 18:30:20.708646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:02.315 [2024-11-20 18:30:20.708651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:02.315 [2024-11-20 18:30:20.708660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:02.315 [2024-11-20 18:30:20.708666] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:02.315 [2024-11-20 18:30:20.708673] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:02.315 [2024-11-20 18:30:20.708679] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:02.315 [2024-11-20 18:30:20.708686] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:02.315 [2024-11-20 18:30:20.708691] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:02.315 [2024-11-20 18:30:20.708698] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:02.315 [2024-11-20 18:30:20.708703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.315 [2024-11-20 18:30:20.708710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:02.315 [2024-11-20 18:30:20.708716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.512 ms 00:21:02.315 [2024-11-20 18:30:20.708722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.315 [2024-11-20 18:30:20.708762] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:21:02.316 [2024-11-20 18:30:20.708785] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:05.612 [2024-11-20 18:30:24.088584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.612 [2024-11-20 18:30:24.088676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:05.612 [2024-11-20 18:30:24.088694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3379.806 ms 00:21:05.612 [2024-11-20 18:30:24.088706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.612 [2024-11-20 18:30:24.120300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.612 [2024-11-20 18:30:24.120367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:05.613 [2024-11-20 18:30:24.120383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.332 ms 00:21:05.613 [2024-11-20 18:30:24.120395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.613 [2024-11-20 18:30:24.120539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.613 [2024-11-20 18:30:24.120554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:05.613 [2024-11-20 18:30:24.120563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:21:05.613 [2024-11-20 18:30:24.120577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.613 [2024-11-20 18:30:24.155757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.613 [2024-11-20 18:30:24.155814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:05.613 [2024-11-20 18:30:24.155826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.111 ms 00:21:05.613 [2024-11-20 18:30:24.155837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.613 [2024-11-20 18:30:24.155874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.613 [2024-11-20 18:30:24.155890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:05.613 [2024-11-20 18:30:24.155899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:05.613 [2024-11-20 18:30:24.155910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.613 [2024-11-20 18:30:24.156537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.613 [2024-11-20 18:30:24.156577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:05.613 [2024-11-20 18:30:24.156588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:21:05.613 [2024-11-20 18:30:24.156597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.613 
[2024-11-20 18:30:24.156714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.613 [2024-11-20 18:30:24.156727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:05.613 [2024-11-20 18:30:24.156738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:21:05.613 [2024-11-20 18:30:24.156751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.613 [2024-11-20 18:30:24.174055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.613 [2024-11-20 18:30:24.174123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:05.613 [2024-11-20 18:30:24.174135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.283 ms 00:21:05.613 [2024-11-20 18:30:24.174145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.613 [2024-11-20 18:30:24.187221] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:05.613 [2024-11-20 18:30:24.191008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.613 [2024-11-20 18:30:24.191050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:05.613 [2024-11-20 18:30:24.191063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.772 ms 00:21:05.613 [2024-11-20 18:30:24.191071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.873 [2024-11-20 18:30:24.287379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.873 [2024-11-20 18:30:24.287447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:05.873 [2024-11-20 18:30:24.287466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.257 ms 00:21:05.873 [2024-11-20 18:30:24.287475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.873 [2024-11-20 18:30:24.287690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.873 [2024-11-20 18:30:24.287707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:05.873 [2024-11-20 18:30:24.287722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:21:05.873 [2024-11-20 18:30:24.287730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.873 [2024-11-20 18:30:24.313477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.873 [2024-11-20 18:30:24.313526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:05.873 [2024-11-20 18:30:24.313543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.688 ms 00:21:05.873 [2024-11-20 18:30:24.313552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.873 [2024-11-20 18:30:24.338336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.873 [2024-11-20 18:30:24.338383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:05.873 [2024-11-20 18:30:24.338401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.724 ms 00:21:05.873 [2024-11-20 18:30:24.338410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.873 [2024-11-20 18:30:24.339020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.873 [2024-11-20 18:30:24.339046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:05.873 
[2024-11-20 18:30:24.339059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:21:05.873 [2024-11-20 18:30:24.339067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.873 [2024-11-20 18:30:24.421313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.873 [2024-11-20 18:30:24.421364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:05.873 [2024-11-20 18:30:24.421384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.178 ms 00:21:05.873 [2024-11-20 18:30:24.421393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.873 [2024-11-20 18:30:24.448756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.873 [2024-11-20 18:30:24.448811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:05.873 [2024-11-20 18:30:24.448848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.259 ms 00:21:05.873 [2024-11-20 18:30:24.448858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.873 [2024-11-20 18:30:24.474347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.873 [2024-11-20 18:30:24.474399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:05.873 [2024-11-20 18:30:24.474415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.433 ms 00:21:05.873 [2024-11-20 18:30:24.474422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.134 [2024-11-20 18:30:24.500419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.134 [2024-11-20 18:30:24.500470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:06.135 [2024-11-20 18:30:24.500485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.941 ms 00:21:06.135 [2024-11-20 18:30:24.500495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.135 [2024-11-20 18:30:24.500553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.135 [2024-11-20 18:30:24.500564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:06.135 [2024-11-20 18:30:24.500579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:06.135 [2024-11-20 18:30:24.500587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.135 [2024-11-20 18:30:24.500685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.135 [2024-11-20 18:30:24.500696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:06.135 [2024-11-20 18:30:24.500710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:21:06.135 [2024-11-20 18:30:24.500718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.135 [2024-11-20 18:30:24.502337] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3804.466 ms, result 0 00:21:06.135 { 00:21:06.135 "name": "ftl0", 00:21:06.135 "uuid": "d1c29b40-f180-4470-a535-460b563a5775" 00:21:06.135 } 00:21:06.135 18:30:24 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:21:06.135 18:30:24 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:06.135 18:30:24 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:21:06.135 18:30:24 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:06.399 [2024-11-20 18:30:24.957297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.399 [2024-11-20 18:30:24.957366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:06.399 [2024-11-20 18:30:24.957382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:06.399 [2024-11-20 18:30:24.957401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.399 [2024-11-20 18:30:24.957427] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:06.399 [2024-11-20 18:30:24.960418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.399 [2024-11-20 18:30:24.960464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:06.399 [2024-11-20 18:30:24.960479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.968 ms 00:21:06.399 [2024-11-20 18:30:24.960488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.399 [2024-11-20 18:30:24.960768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.399 [2024-11-20 18:30:24.960781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:06.399 [2024-11-20 18:30:24.960797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:21:06.399 [2024-11-20 18:30:24.960805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.399 [2024-11-20 18:30:24.964060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.399 [2024-11-20 18:30:24.964088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:06.399 [2024-11-20 18:30:24.964107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.220 ms 00:21:06.399 [2024-11-20 18:30:24.964115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.399 [2024-11-20 18:30:24.970379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.399 [2024-11-20 18:30:24.970421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:06.399 [2024-11-20 18:30:24.970437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.240 ms 00:21:06.399 [2024-11-20 18:30:24.970446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.399 [2024-11-20 18:30:24.996698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.399 [2024-11-20 18:30:24.996747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:06.399 [2024-11-20 18:30:24.996762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.166 ms 00:21:06.399 [2024-11-20 18:30:24.996770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.399 [2024-11-20 18:30:25.013533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.399 [2024-11-20 18:30:25.013583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:06.399 [2024-11-20 18:30:25.013599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.701 ms 00:21:06.399 [2024-11-20 18:30:25.013608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.399 [2024-11-20 18:30:25.013783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.399 [2024-11-20 18:30:25.013796] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:06.399 [2024-11-20 18:30:25.013809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:21:06.399 [2024-11-20 18:30:25.013818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.697 [2024-11-20 18:30:25.039821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.697 [2024-11-20 18:30:25.039871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:06.697 [2024-11-20 18:30:25.039887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.981 ms 00:21:06.697 [2024-11-20 18:30:25.039895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.697 [2024-11-20 18:30:25.065676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.697 [2024-11-20 18:30:25.065726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:06.697 [2024-11-20 18:30:25.065741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.727 ms 00:21:06.697 [2024-11-20 18:30:25.065749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.697 [2024-11-20 18:30:25.090553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.697 [2024-11-20 18:30:25.090600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:06.697 [2024-11-20 18:30:25.090615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.747 ms 00:21:06.697 [2024-11-20 18:30:25.090623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.697 [2024-11-20 18:30:25.115536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.697 [2024-11-20 18:30:25.115586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:06.697 [2024-11-20 18:30:25.115600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.815 ms 00:21:06.697 [2024-11-20 18:30:25.115608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.697 [2024-11-20 18:30:25.115659] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:06.697 [2024-11-20 18:30:25.115674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:06.697 [2024-11-20 18:30:25.115687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:06.697 [2024-11-20 18:30:25.115696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:06.697 [2024-11-20 18:30:25.115707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:06.697 [2024-11-20 18:30:25.115714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:06.697 [2024-11-20 18:30:25.115725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:06.697 [2024-11-20 18:30:25.115733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:06.697 [2024-11-20 18:30:25.115748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:06.697 [2024-11-20 18:30:25.115757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:06.697 [2024-11-20 18:30:25.115768] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:06.697 [2024-11-20 18:30:25.115776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:06.697 [2024-11-20 18:30:25.115787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:06.697 [2024-11-20 18:30:25.115795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:06.697 [2024-11-20 18:30:25.115807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:06.697 [2024-11-20 18:30:25.115816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:06.697 [2024-11-20 18:30:25.115827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:06.697 [2024-11-20 18:30:25.115837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:06.697 [2024-11-20 18:30:25.115847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:06.697 [2024-11-20 18:30:25.115855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:06.697 [2024-11-20 18:30:25.115867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:06.697 [2024-11-20 18:30:25.115875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:06.697 [2024-11-20 18:30:25.115885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:06.697 [2024-11-20 18:30:25.115893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:06.697 [2024-11-20 18:30:25.115905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:06.697 [2024-11-20 18:30:25.115913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:06.697 [2024-11-20 18:30:25.115922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:06.697 [2024-11-20 18:30:25.115932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:06.697 [2024-11-20 18:30:25.115941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:06.697 [2024-11-20 18:30:25.115949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:06.697 [2024-11-20 18:30:25.115961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:06.697 [2024-11-20 18:30:25.115969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.115979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.115986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.115997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 
[2024-11-20 18:30:25.116005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:21:06.698 [2024-11-20 18:30:25.116264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:06.698 [2024-11-20 18:30:25.116652] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:06.698 [2024-11-20 18:30:25.116665] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d1c29b40-f180-4470-a535-460b563a5775 00:21:06.698 [2024-11-20 18:30:25.116675] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:06.698 [2024-11-20 18:30:25.116686] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:06.698 [2024-11-20 18:30:25.116694] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:06.698 [2024-11-20 18:30:25.116708] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:06.698 [2024-11-20 18:30:25.116716] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:06.698 [2024-11-20 18:30:25.116725] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:06.698 [2024-11-20 18:30:25.116733] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:06.698 [2024-11-20 18:30:25.116743] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:06.698 [2024-11-20 18:30:25.116750] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:21:06.698 [2024-11-20 18:30:25.116759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.698 [2024-11-20 18:30:25.116767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:06.698 [2024-11-20 18:30:25.116779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.103 ms 00:21:06.698 [2024-11-20 18:30:25.116787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.698 [2024-11-20 18:30:25.130815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.698 [2024-11-20 18:30:25.130859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:06.698 [2024-11-20 18:30:25.130872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.963 ms 00:21:06.698 [2024-11-20 18:30:25.130881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.698 [2024-11-20 18:30:25.131335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.699 [2024-11-20 18:30:25.131360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:06.699 [2024-11-20 18:30:25.131372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:21:06.699 [2024-11-20 18:30:25.131383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.699 [2024-11-20 18:30:25.177741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.699 [2024-11-20 18:30:25.177789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:06.699 [2024-11-20 18:30:25.177804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.699 [2024-11-20 18:30:25.177813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.699 [2024-11-20 18:30:25.177882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.699 [2024-11-20 18:30:25.177891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:06.699 [2024-11-20 18:30:25.177902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.699 [2024-11-20 18:30:25.177913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.699 [2024-11-20 18:30:25.178001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.699 [2024-11-20 18:30:25.178013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:06.699 [2024-11-20 18:30:25.178024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.699 [2024-11-20 18:30:25.178032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.699 [2024-11-20 18:30:25.178055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.699 [2024-11-20 18:30:25.178064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:06.699 [2024-11-20 18:30:25.178074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.699 [2024-11-20 18:30:25.178082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.699 [2024-11-20 18:30:25.261892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.699 [2024-11-20 18:30:25.261972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:06.699 [2024-11-20 18:30:25.261988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:21:06.699 [2024-11-20 18:30:25.261997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.989 [2024-11-20 18:30:25.331901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.989 [2024-11-20 18:30:25.331961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:06.989 [2024-11-20 18:30:25.331977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.989 [2024-11-20 18:30:25.331991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.990 [2024-11-20 18:30:25.332130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.990 [2024-11-20 18:30:25.332141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:06.990 [2024-11-20 18:30:25.332153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.990 [2024-11-20 18:30:25.332161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.990 [2024-11-20 18:30:25.332217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.990 [2024-11-20 18:30:25.332229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:06.990 [2024-11-20 18:30:25.332241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.990 [2024-11-20 18:30:25.332250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.990 [2024-11-20 18:30:25.332361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.990 [2024-11-20 18:30:25.332371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:06.990 [2024-11-20 18:30:25.332383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.990 [2024-11-20 18:30:25.332391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.990 [2024-11-20 18:30:25.332429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.990 [2024-11-20 18:30:25.332441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:06.990 [2024-11-20 18:30:25.332452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.990 [2024-11-20 18:30:25.332460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.990 [2024-11-20 18:30:25.332506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.990 [2024-11-20 18:30:25.332520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:06.990 [2024-11-20 18:30:25.332530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.990 [2024-11-20 18:30:25.332538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.990 [2024-11-20 18:30:25.332593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.990 [2024-11-20 18:30:25.332605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:06.990 [2024-11-20 18:30:25.332617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.990 [2024-11-20 18:30:25.332625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.990 [2024-11-20 18:30:25.332777] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 375.436 ms, result 0 00:21:06.990 true 00:21:06.990 18:30:25 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 77063 
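The trace_step lines above follow a fixed per-step pattern (Action/Rollback, name, duration, status), which makes a shutdown like this easy to summarize mechanically. As a minimal sketch — assuming the console output was saved with one log entry per line to a file named build.log (hypothetical name) — the slowest steps can be ranked like so:

    # pair each trace_step "name:" entry with the "duration:" entry that follows it,
    # then sort numerically by duration, largest first
    awk '
      /trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] name:/     { sub(/.*name: /, "");     name = $0 }
      /trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration:/ { sub(/.*duration: /, ""); print $0 "\t" name }
    ' build.log | sort -rn | head

Run against the shutdown above, this would rank "Deinitialize L2P" (13.963 ms) ahead of "Dump statistics" (1.103 ms) and the 0.000 ms rollback steps.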
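The killprocess 77063 call above is the harness tearing down the SPDK app now that the FTL shutdown finished with result 0. Judging from the xtrace that follows (the -z guard, the kill -0 probe, the uname/ps checks resolving the name to reactor_0, then kill and wait), the helper in common/autotest_common.sh behaves roughly like the sketch below. This is an illustrative reconstruction of the visible trace, not the verbatim function; the real helper also special-cases process_name = sudo:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1                    # the '[' -z 77063 ']' guard
        kill -0 "$pid" 2> /dev/null || return 0      # already gone, nothing to do
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # "reactor_0" here
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # reap the child so its bdevs and ports are released
    }

Note that wait only works because the target process was launched by the same shell earlier in the test script.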
00:21:06.990 18:30:25 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77063 ']' 00:21:06.990 18:30:25 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77063 00:21:06.990 18:30:25 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:21:06.990 18:30:25 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:06.990 18:30:25 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77063 00:21:06.990 18:30:25 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:06.990 killing process with pid 77063 00:21:06.990 18:30:25 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:06.990 18:30:25 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77063' 00:21:06.990 18:30:25 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 77063 00:21:06.990 18:30:25 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 77063 00:21:12.280 18:30:30 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:21:16.474 262144+0 records in 00:21:16.474 262144+0 records out 00:21:16.474 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.01995 s, 267 MB/s 00:21:16.474 18:30:34 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:17.410 18:30:35 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:17.410 [2024-11-20 18:30:35.943291] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:21:17.410 [2024-11-20 18:30:35.943424] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77290 ] 00:21:17.670 [2024-11-20 18:30:36.105397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.670 [2024-11-20 18:30:36.247930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:17.929 [2024-11-20 18:30:36.463560] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:17.929 [2024-11-20 18:30:36.463605] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:18.191 [2024-11-20 18:30:36.611215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.191 [2024-11-20 18:30:36.611256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:18.191 [2024-11-20 18:30:36.611273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:18.191 [2024-11-20 18:30:36.611281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.191 [2024-11-20 18:30:36.611327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.191 [2024-11-20 18:30:36.611337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:18.191 [2024-11-20 18:30:36.611347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:21:18.191 [2024-11-20 18:30:36.611354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.191 [2024-11-20 18:30:36.611371] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:21:18.191 [2024-11-20 18:30:36.612028] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:18.191 [2024-11-20 18:30:36.612045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.191 [2024-11-20 18:30:36.612053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:18.191 [2024-11-20 18:30:36.612061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.679 ms 00:21:18.191 [2024-11-20 18:30:36.612068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.191 [2024-11-20 18:30:36.613185] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:18.191 [2024-11-20 18:30:36.625963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.191 [2024-11-20 18:30:36.625993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:18.191 [2024-11-20 18:30:36.626004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.779 ms 00:21:18.191 [2024-11-20 18:30:36.626012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.191 [2024-11-20 18:30:36.626069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.191 [2024-11-20 18:30:36.626078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:18.191 [2024-11-20 18:30:36.626086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:21:18.191 [2024-11-20 18:30:36.626104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.191 [2024-11-20 18:30:36.631501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.191 [2024-11-20 18:30:36.631528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:18.191 [2024-11-20 18:30:36.631537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.344 ms 00:21:18.191 [2024-11-20 18:30:36.631544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.191 [2024-11-20 18:30:36.631622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.191 [2024-11-20 18:30:36.631631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:18.191 [2024-11-20 18:30:36.631640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:21:18.191 [2024-11-20 18:30:36.631647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.191 [2024-11-20 18:30:36.631694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.191 [2024-11-20 18:30:36.631704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:18.191 [2024-11-20 18:30:36.631712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:18.191 [2024-11-20 18:30:36.631719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.191 [2024-11-20 18:30:36.631739] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:18.191 [2024-11-20 18:30:36.635174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.191 [2024-11-20 18:30:36.635197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:18.191 [2024-11-20 18:30:36.635207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.440 ms 00:21:18.191 [2024-11-20 18:30:36.635216] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.191 [2024-11-20 18:30:36.635244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.191 [2024-11-20 18:30:36.635251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:18.191 [2024-11-20 18:30:36.635259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:18.191 [2024-11-20 18:30:36.635266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.191 [2024-11-20 18:30:36.635285] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:18.191 [2024-11-20 18:30:36.635303] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:18.191 [2024-11-20 18:30:36.635337] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:18.191 [2024-11-20 18:30:36.635355] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:18.191 [2024-11-20 18:30:36.635472] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:18.191 [2024-11-20 18:30:36.635483] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:18.191 [2024-11-20 18:30:36.635493] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:18.191 [2024-11-20 18:30:36.635503] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:18.191 [2024-11-20 18:30:36.635512] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:18.191 [2024-11-20 18:30:36.635521] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:18.191 [2024-11-20 18:30:36.635528] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:18.191 [2024-11-20 18:30:36.635535] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:18.191 [2024-11-20 18:30:36.635542] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:18.191 [2024-11-20 18:30:36.635552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.192 [2024-11-20 18:30:36.635559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:18.192 [2024-11-20 18:30:36.635568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:21:18.192 [2024-11-20 18:30:36.635574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.192 [2024-11-20 18:30:36.635655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.192 [2024-11-20 18:30:36.635663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:18.192 [2024-11-20 18:30:36.635672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:21:18.192 [2024-11-20 18:30:36.635679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.192 [2024-11-20 18:30:36.635811] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:18.192 [2024-11-20 18:30:36.635829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:18.192 [2024-11-20 18:30:36.635841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:21:18.192 [2024-11-20 18:30:36.635851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:18.192 [2024-11-20 18:30:36.635863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:18.192 [2024-11-20 18:30:36.635869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:18.192 [2024-11-20 18:30:36.635877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:18.192 [2024-11-20 18:30:36.635885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:18.192 [2024-11-20 18:30:36.635892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:18.192 [2024-11-20 18:30:36.635899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:18.192 [2024-11-20 18:30:36.635906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:18.192 [2024-11-20 18:30:36.635913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:18.192 [2024-11-20 18:30:36.635920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:18.192 [2024-11-20 18:30:36.635930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:18.192 [2024-11-20 18:30:36.635938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:18.192 [2024-11-20 18:30:36.635949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:18.192 [2024-11-20 18:30:36.635956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:18.192 [2024-11-20 18:30:36.635962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:18.192 [2024-11-20 18:30:36.635969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:18.192 [2024-11-20 18:30:36.635976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:18.192 [2024-11-20 18:30:36.635982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:18.192 [2024-11-20 18:30:36.635989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:18.192 [2024-11-20 18:30:36.635995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:18.192 [2024-11-20 18:30:36.636001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:18.192 [2024-11-20 18:30:36.636007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:18.192 [2024-11-20 18:30:36.636014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:18.192 [2024-11-20 18:30:36.636020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:18.192 [2024-11-20 18:30:36.636026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:18.192 [2024-11-20 18:30:36.636033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:18.192 [2024-11-20 18:30:36.636043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:18.192 [2024-11-20 18:30:36.636050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:18.192 [2024-11-20 18:30:36.636059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:18.192 [2024-11-20 18:30:36.636066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:18.192 [2024-11-20 18:30:36.636076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:18.192 [2024-11-20 18:30:36.636082] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:21:18.192 [2024-11-20 18:30:36.636089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:18.192 [2024-11-20 18:30:36.636112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:18.192 [2024-11-20 18:30:36.636119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:18.192 [2024-11-20 18:30:36.636125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:18.192 [2024-11-20 18:30:36.636132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:18.192 [2024-11-20 18:30:36.636140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:18.192 [2024-11-20 18:30:36.636146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:18.192 [2024-11-20 18:30:36.636153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:18.192 [2024-11-20 18:30:36.636160] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:18.192 [2024-11-20 18:30:36.636170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:18.192 [2024-11-20 18:30:36.636177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:18.192 [2024-11-20 18:30:36.636185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:18.192 [2024-11-20 18:30:36.636193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:18.192 [2024-11-20 18:30:36.636199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:18.192 [2024-11-20 18:30:36.636206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:18.192 [2024-11-20 18:30:36.636213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:18.192 [2024-11-20 18:30:36.636219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:18.192 [2024-11-20 18:30:36.636226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:18.192 [2024-11-20 18:30:36.636234] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:18.192 [2024-11-20 18:30:36.636243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:18.192 [2024-11-20 18:30:36.636251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:18.192 [2024-11-20 18:30:36.636258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:18.192 [2024-11-20 18:30:36.636265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:18.192 [2024-11-20 18:30:36.636272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:18.192 [2024-11-20 18:30:36.636279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:18.192 [2024-11-20 18:30:36.636286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:18.192 [2024-11-20 18:30:36.636293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:18.192 [2024-11-20 18:30:36.636300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:18.192 [2024-11-20 18:30:36.636307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:18.192 [2024-11-20 18:30:36.636314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:18.192 [2024-11-20 18:30:36.636321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:18.192 [2024-11-20 18:30:36.636328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:18.192 [2024-11-20 18:30:36.636336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:18.192 [2024-11-20 18:30:36.636343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:18.192 [2024-11-20 18:30:36.636350] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:18.192 [2024-11-20 18:30:36.636361] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:18.192 [2024-11-20 18:30:36.636369] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:18.192 [2024-11-20 18:30:36.636376] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:18.192 [2024-11-20 18:30:36.636383] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:18.192 [2024-11-20 18:30:36.636391] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:18.192 [2024-11-20 18:30:36.636398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.192 [2024-11-20 18:30:36.636406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:18.192 [2024-11-20 18:30:36.636413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.667 ms 00:21:18.192 [2024-11-20 18:30:36.636420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.192 [2024-11-20 18:30:36.663036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.192 [2024-11-20 18:30:36.663066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:18.192 [2024-11-20 18:30:36.663076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.575 ms 00:21:18.192 [2024-11-20 18:30:36.663084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.192 [2024-11-20 18:30:36.663179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.192 [2024-11-20 18:30:36.663188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:18.192 [2024-11-20 18:30:36.663196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.061 ms 00:21:18.192 [2024-11-20 18:30:36.663203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.192 [2024-11-20 18:30:36.709001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.192 [2024-11-20 18:30:36.709037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:18.192 [2024-11-20 18:30:36.709050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.751 ms 00:21:18.192 [2024-11-20 18:30:36.709058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.192 [2024-11-20 18:30:36.709107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.192 [2024-11-20 18:30:36.709117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:18.193 [2024-11-20 18:30:36.709126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:18.193 [2024-11-20 18:30:36.709137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.193 [2024-11-20 18:30:36.709553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.193 [2024-11-20 18:30:36.709581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:18.193 [2024-11-20 18:30:36.709590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.350 ms 00:21:18.193 [2024-11-20 18:30:36.709598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.193 [2024-11-20 18:30:36.709733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.193 [2024-11-20 18:30:36.709744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:18.193 [2024-11-20 18:30:36.709752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:21:18.193 [2024-11-20 18:30:36.709764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.193 [2024-11-20 18:30:36.723347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.193 [2024-11-20 18:30:36.723378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:18.193 [2024-11-20 18:30:36.723391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.566 ms 00:21:18.193 [2024-11-20 18:30:36.723399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.193 [2024-11-20 18:30:36.736600] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:18.193 [2024-11-20 18:30:36.736638] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:18.193 [2024-11-20 18:30:36.736651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.193 [2024-11-20 18:30:36.736658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:18.193 [2024-11-20 18:30:36.736667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.163 ms 00:21:18.193 [2024-11-20 18:30:36.736675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.193 [2024-11-20 18:30:36.761379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.193 [2024-11-20 18:30:36.761417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:18.193 [2024-11-20 18:30:36.761435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.661 ms 00:21:18.193 [2024-11-20 18:30:36.761443] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.193 [2024-11-20 18:30:36.773818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.193 [2024-11-20 18:30:36.773863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:18.193 [2024-11-20 18:30:36.773874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.325 ms 00:21:18.193 [2024-11-20 18:30:36.773882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.193 [2024-11-20 18:30:36.786555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.193 [2024-11-20 18:30:36.786594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:18.193 [2024-11-20 18:30:36.786606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.628 ms 00:21:18.193 [2024-11-20 18:30:36.786613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.193 [2024-11-20 18:30:36.787282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.193 [2024-11-20 18:30:36.787306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:18.193 [2024-11-20 18:30:36.787317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:21:18.193 [2024-11-20 18:30:36.787325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.455 [2024-11-20 18:30:36.853369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.455 [2024-11-20 18:30:36.853432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:18.455 [2024-11-20 18:30:36.853450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.022 ms 00:21:18.455 [2024-11-20 18:30:36.853467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.455 [2024-11-20 18:30:36.866469] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:18.455 [2024-11-20 18:30:36.869770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.455 [2024-11-20 18:30:36.869811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:18.455 [2024-11-20 18:30:36.869824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.574 ms 00:21:18.455 [2024-11-20 18:30:36.869833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.455 [2024-11-20 18:30:36.869930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.455 [2024-11-20 18:30:36.869942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:18.455 [2024-11-20 18:30:36.869953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:21:18.455 [2024-11-20 18:30:36.869961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.455 [2024-11-20 18:30:36.870038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.455 [2024-11-20 18:30:36.870049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:18.455 [2024-11-20 18:30:36.870060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:18.455 [2024-11-20 18:30:36.870068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.455 [2024-11-20 18:30:36.870089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.455 [2024-11-20 18:30:36.870118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:21:18.455 [2024-11-20 18:30:36.870128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:18.455 [2024-11-20 18:30:36.870136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.455 [2024-11-20 18:30:36.870172] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:18.455 [2024-11-20 18:30:36.870185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.455 [2024-11-20 18:30:36.870194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:18.455 [2024-11-20 18:30:36.870203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:18.455 [2024-11-20 18:30:36.870211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.455 [2024-11-20 18:30:36.896660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.455 [2024-11-20 18:30:36.896706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:18.455 [2024-11-20 18:30:36.896720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.429 ms 00:21:18.455 [2024-11-20 18:30:36.896735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.455 [2024-11-20 18:30:36.896830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.455 [2024-11-20 18:30:36.896841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:18.455 [2024-11-20 18:30:36.896850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:21:18.455 [2024-11-20 18:30:36.896870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.455 [2024-11-20 18:30:36.898317] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 286.559 ms, result 0 00:21:19.399  [2024-11-20T18:30:38.972Z] Copying: 22/1024 [MB] (22 MBps) [2024-11-20T18:30:39.914Z] Copying: 40/1024 [MB] (17 MBps) [2024-11-20T18:30:41.294Z] Copying: 58/1024 [MB] (18 MBps) [2024-11-20T18:30:42.232Z] Copying: 70/1024 [MB] (12 MBps) [2024-11-20T18:30:43.172Z] Copying: 90/1024 [MB] (20 MBps) [2024-11-20T18:30:44.115Z] Copying: 110/1024 [MB] (19 MBps) [2024-11-20T18:30:45.059Z] Copying: 130/1024 [MB] (20 MBps) [2024-11-20T18:30:46.003Z] Copying: 150/1024 [MB] (20 MBps) [2024-11-20T18:30:46.946Z] Copying: 168/1024 [MB] (17 MBps) [2024-11-20T18:30:48.329Z] Copying: 178/1024 [MB] (10 MBps) [2024-11-20T18:30:49.274Z] Copying: 188/1024 [MB] (10 MBps) [2024-11-20T18:30:50.217Z] Copying: 198/1024 [MB] (10 MBps) [2024-11-20T18:30:51.155Z] Copying: 213664/1048576 [kB] (10232 kBps) [2024-11-20T18:30:52.099Z] Copying: 253/1024 [MB] (44 MBps) [2024-11-20T18:30:53.044Z] Copying: 273/1024 [MB] (19 MBps) [2024-11-20T18:30:53.989Z] Copying: 285/1024 [MB] (12 MBps) [2024-11-20T18:30:54.960Z] Copying: 302/1024 [MB] (16 MBps) [2024-11-20T18:30:56.346Z] Copying: 320/1024 [MB] (17 MBps) [2024-11-20T18:30:56.918Z] Copying: 330/1024 [MB] (10 MBps) [2024-11-20T18:30:58.305Z] Copying: 340/1024 [MB] (10 MBps) [2024-11-20T18:30:59.247Z] Copying: 355/1024 [MB] (14 MBps) [2024-11-20T18:31:00.190Z] Copying: 365/1024 [MB] (10 MBps) [2024-11-20T18:31:01.218Z] Copying: 375/1024 [MB] (10 MBps) [2024-11-20T18:31:02.161Z] Copying: 385/1024 [MB] (10 MBps) [2024-11-20T18:31:03.106Z] Copying: 402/1024 [MB] (16 MBps) [2024-11-20T18:31:04.051Z] Copying: 412/1024 [MB] (10 MBps) [2024-11-20T18:31:04.994Z] Copying: 423/1024 [MB] 
(10 MBps) [2024-11-20T18:31:05.936Z] Copying: 434/1024 [MB] (10 MBps) [2024-11-20T18:31:07.319Z] Copying: 444/1024 [MB] (10 MBps) [2024-11-20T18:31:08.260Z] Copying: 454/1024 [MB] (10 MBps) [2024-11-20T18:31:09.203Z] Copying: 465/1024 [MB] (10 MBps) [2024-11-20T18:31:10.145Z] Copying: 475/1024 [MB] (10 MBps) [2024-11-20T18:31:11.089Z] Copying: 486/1024 [MB] (10 MBps) [2024-11-20T18:31:12.035Z] Copying: 499/1024 [MB] (13 MBps) [2024-11-20T18:31:12.980Z] Copying: 515/1024 [MB] (15 MBps) [2024-11-20T18:31:13.925Z] Copying: 525/1024 [MB] (10 MBps) [2024-11-20T18:31:15.311Z] Copying: 535/1024 [MB] (10 MBps) [2024-11-20T18:31:16.254Z] Copying: 558984/1048576 [kB] (10212 kBps) [2024-11-20T18:31:17.197Z] Copying: 562/1024 [MB] (16 MBps) [2024-11-20T18:31:18.139Z] Copying: 572/1024 [MB] (10 MBps) [2024-11-20T18:31:19.080Z] Copying: 585/1024 [MB] (13 MBps) [2024-11-20T18:31:20.023Z] Copying: 600/1024 [MB] (15 MBps) [2024-11-20T18:31:20.981Z] Copying: 612/1024 [MB] (11 MBps) [2024-11-20T18:31:21.925Z] Copying: 622/1024 [MB] (10 MBps) [2024-11-20T18:31:23.310Z] Copying: 635/1024 [MB] (12 MBps) [2024-11-20T18:31:24.257Z] Copying: 648/1024 [MB] (12 MBps) [2024-11-20T18:31:25.201Z] Copying: 661/1024 [MB] (13 MBps) [2024-11-20T18:31:26.143Z] Copying: 674/1024 [MB] (13 MBps) [2024-11-20T18:31:27.085Z] Copying: 686/1024 [MB] (11 MBps) [2024-11-20T18:31:28.030Z] Copying: 696/1024 [MB] (10 MBps) [2024-11-20T18:31:29.068Z] Copying: 708/1024 [MB] (11 MBps) [2024-11-20T18:31:30.024Z] Copying: 720/1024 [MB] (11 MBps) [2024-11-20T18:31:30.967Z] Copying: 734/1024 [MB] (14 MBps) [2024-11-20T18:31:31.912Z] Copying: 746/1024 [MB] (11 MBps) [2024-11-20T18:31:33.303Z] Copying: 756/1024 [MB] (10 MBps) [2024-11-20T18:31:34.249Z] Copying: 766/1024 [MB] (10 MBps) [2024-11-20T18:31:35.194Z] Copying: 776/1024 [MB] (10 MBps) [2024-11-20T18:31:36.138Z] Copying: 787/1024 [MB] (10 MBps) [2024-11-20T18:31:37.079Z] Copying: 797/1024 [MB] (10 MBps) [2024-11-20T18:31:38.020Z] Copying: 808/1024 [MB] (10 MBps) [2024-11-20T18:31:38.962Z] Copying: 818/1024 [MB] (10 MBps) [2024-11-20T18:31:40.348Z] Copying: 829/1024 [MB] (10 MBps) [2024-11-20T18:31:40.921Z] Copying: 859712/1048576 [kB] (10072 kBps) [2024-11-20T18:31:42.306Z] Copying: 869848/1048576 [kB] (10136 kBps) [2024-11-20T18:31:43.250Z] Copying: 862/1024 [MB] (13 MBps) [2024-11-20T18:31:44.187Z] Copying: 872/1024 [MB] (10 MBps) [2024-11-20T18:31:45.144Z] Copying: 887/1024 [MB] (14 MBps) [2024-11-20T18:31:46.087Z] Copying: 913/1024 [MB] (25 MBps) [2024-11-20T18:31:47.032Z] Copying: 923/1024 [MB] (10 MBps) [2024-11-20T18:31:47.976Z] Copying: 933/1024 [MB] (10 MBps) [2024-11-20T18:31:48.919Z] Copying: 943/1024 [MB] (10 MBps) [2024-11-20T18:31:50.305Z] Copying: 961/1024 [MB] (17 MBps) [2024-11-20T18:31:51.249Z] Copying: 978/1024 [MB] (16 MBps) [2024-11-20T18:31:52.193Z] Copying: 994/1024 [MB] (16 MBps) [2024-11-20T18:31:53.137Z] Copying: 1005/1024 [MB] (11 MBps) [2024-11-20T18:31:53.399Z] Copying: 1019/1024 [MB] (13 MBps) [2024-11-20T18:31:53.399Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-11-20 18:31:53.391292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.770 [2024-11-20 18:31:53.391349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:34.770 [2024-11-20 18:31:53.391365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:34.770 [2024-11-20 18:31:53.391374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.770 [2024-11-20 18:31:53.391397] mngt/ftl_mngt_ioch.c: 
136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:34.770 [2024-11-20 18:31:53.394534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.770 [2024-11-20 18:31:53.394577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:34.770 [2024-11-20 18:31:53.394590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.119 ms 00:22:34.770 [2024-11-20 18:31:53.394608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.033 [2024-11-20 18:31:53.398227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.033 [2024-11-20 18:31:53.398271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:35.033 [2024-11-20 18:31:53.398283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.587 ms 00:22:35.033 [2024-11-20 18:31:53.398291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.033 [2024-11-20 18:31:53.417689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.033 [2024-11-20 18:31:53.417737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:35.033 [2024-11-20 18:31:53.417750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.379 ms 00:22:35.033 [2024-11-20 18:31:53.417758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.033 [2024-11-20 18:31:53.424120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.033 [2024-11-20 18:31:53.424165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:35.033 [2024-11-20 18:31:53.424177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.299 ms 00:22:35.033 [2024-11-20 18:31:53.424185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.033 [2024-11-20 18:31:53.451997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.033 [2024-11-20 18:31:53.452044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:35.033 [2024-11-20 18:31:53.452057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.730 ms 00:22:35.033 [2024-11-20 18:31:53.452066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.033 [2024-11-20 18:31:53.468824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.033 [2024-11-20 18:31:53.468869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:35.033 [2024-11-20 18:31:53.468882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.693 ms 00:22:35.033 [2024-11-20 18:31:53.468890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.033 [2024-11-20 18:31:53.469040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.034 [2024-11-20 18:31:53.469060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:35.034 [2024-11-20 18:31:53.469121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:22:35.034 [2024-11-20 18:31:53.469134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.034 [2024-11-20 18:31:53.496819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.034 [2024-11-20 18:31:53.496867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:35.034 [2024-11-20 18:31:53.496879] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 27.662 ms 00:22:35.034 [2024-11-20 18:31:53.496886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.034 [2024-11-20 18:31:53.523222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.034 [2024-11-20 18:31:53.523266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:35.034 [2024-11-20 18:31:53.523291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.284 ms 00:22:35.034 [2024-11-20 18:31:53.523298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.034 [2024-11-20 18:31:53.549577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.034 [2024-11-20 18:31:53.549636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:35.034 [2024-11-20 18:31:53.549648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.226 ms 00:22:35.034 [2024-11-20 18:31:53.549655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.034 [2024-11-20 18:31:53.576079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.034 [2024-11-20 18:31:53.576145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:35.034 [2024-11-20 18:31:53.576158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.327 ms 00:22:35.034 [2024-11-20 18:31:53.576167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.034 [2024-11-20 18:31:53.576220] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:35.034 [2024-11-20 18:31:53.576236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576352] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576544] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 
18:31:53.576737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:35.034 [2024-11-20 18:31:53.576814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:35.035 [2024-11-20 18:31:53.576822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:35.035 [2024-11-20 18:31:53.576830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:35.035 [2024-11-20 18:31:53.576837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:35.035 [2024-11-20 18:31:53.576845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:35.035 [2024-11-20 18:31:53.576852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:35.035 [2024-11-20 18:31:53.576859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:35.035 [2024-11-20 18:31:53.576867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:35.035 [2024-11-20 18:31:53.576874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:35.035 [2024-11-20 18:31:53.576883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:35.035 [2024-11-20 18:31:53.576891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:35.035 [2024-11-20 18:31:53.576900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:35.035 [2024-11-20 18:31:53.576908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:35.035 [2024-11-20 18:31:53.576915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:35.035 [2024-11-20 18:31:53.576923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 
00:22:35.035 [2024-11-20 18:31:53.576931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:35.035 [2024-11-20 18:31:53.576938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:35.035 [2024-11-20 18:31:53.576946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:35.035 [2024-11-20 18:31:53.576954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:35.035 [2024-11-20 18:31:53.576961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:35.035 [2024-11-20 18:31:53.576970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:35.035 [2024-11-20 18:31:53.576979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:35.035 [2024-11-20 18:31:53.576986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:35.035 [2024-11-20 18:31:53.576994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:35.035 [2024-11-20 18:31:53.577001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:35.035 [2024-11-20 18:31:53.577009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:35.035 [2024-11-20 18:31:53.577017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:35.035 [2024-11-20 18:31:53.577034] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:35.035 [2024-11-20 18:31:53.577043] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d1c29b40-f180-4470-a535-460b563a5775 00:22:35.035 [2024-11-20 18:31:53.577056] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:35.035 [2024-11-20 18:31:53.577063] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:35.035 [2024-11-20 18:31:53.577119] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:35.035 [2024-11-20 18:31:53.577132] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:35.035 [2024-11-20 18:31:53.577142] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:35.035 [2024-11-20 18:31:53.577155] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:35.035 [2024-11-20 18:31:53.577167] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:35.035 [2024-11-20 18:31:53.577187] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:35.035 [2024-11-20 18:31:53.577198] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:35.035 [2024-11-20 18:31:53.577210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.035 [2024-11-20 18:31:53.577231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:35.035 [2024-11-20 18:31:53.577247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.991 ms 00:22:35.035 [2024-11-20 18:31:53.577259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.035 [2024-11-20 18:31:53.591189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.035 [2024-11-20 
18:31:53.591229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:35.035 [2024-11-20 18:31:53.591241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.877 ms 00:22:35.035 [2024-11-20 18:31:53.591249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.035 [2024-11-20 18:31:53.591640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.035 [2024-11-20 18:31:53.591650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:35.035 [2024-11-20 18:31:53.591660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:22:35.035 [2024-11-20 18:31:53.591676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.035 [2024-11-20 18:31:53.629241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.035 [2024-11-20 18:31:53.629287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:35.035 [2024-11-20 18:31:53.629300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.035 [2024-11-20 18:31:53.629310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.035 [2024-11-20 18:31:53.629382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.035 [2024-11-20 18:31:53.629392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:35.035 [2024-11-20 18:31:53.629402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.035 [2024-11-20 18:31:53.629418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.035 [2024-11-20 18:31:53.629512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.035 [2024-11-20 18:31:53.629525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:35.035 [2024-11-20 18:31:53.629534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.035 [2024-11-20 18:31:53.629543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.035 [2024-11-20 18:31:53.629561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.035 [2024-11-20 18:31:53.629571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:35.035 [2024-11-20 18:31:53.629579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.035 [2024-11-20 18:31:53.629588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.296 [2024-11-20 18:31:53.715049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.296 [2024-11-20 18:31:53.715118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:35.296 [2024-11-20 18:31:53.715134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.296 [2024-11-20 18:31:53.715144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.296 [2024-11-20 18:31:53.785487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.296 [2024-11-20 18:31:53.785543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:35.296 [2024-11-20 18:31:53.785556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.296 [2024-11-20 18:31:53.785565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.296 [2024-11-20 18:31:53.785639] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.296 [2024-11-20 18:31:53.785650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:35.296 [2024-11-20 18:31:53.785659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.296 [2024-11-20 18:31:53.785668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.296 [2024-11-20 18:31:53.785731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.296 [2024-11-20 18:31:53.785741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:35.296 [2024-11-20 18:31:53.785750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.296 [2024-11-20 18:31:53.785759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.296 [2024-11-20 18:31:53.785864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.296 [2024-11-20 18:31:53.785875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:35.296 [2024-11-20 18:31:53.785885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.296 [2024-11-20 18:31:53.785893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.296 [2024-11-20 18:31:53.785931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.296 [2024-11-20 18:31:53.785940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:35.296 [2024-11-20 18:31:53.785949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.296 [2024-11-20 18:31:53.785957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.296 [2024-11-20 18:31:53.786001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.296 [2024-11-20 18:31:53.786014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:35.296 [2024-11-20 18:31:53.786023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.296 [2024-11-20 18:31:53.786031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.296 [2024-11-20 18:31:53.786080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.296 [2024-11-20 18:31:53.786091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:35.296 [2024-11-20 18:31:53.786118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.296 [2024-11-20 18:31:53.786127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.296 [2024-11-20 18:31:53.786267] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 394.932 ms, result 0 00:22:36.238 00:22:36.238 00:22:36.238 18:31:54 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:22:36.238 [2024-11-20 18:31:54.710459] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
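(For reference, the ftl_restore step above drives spdk_dd directly; the sketch below restates that invocation in script form. It is a minimal sketch, not part of the log: the binary path, flags, and file paths are copied verbatim from this run, and the SPDK variable is introduced here only for readability.)

    # Read the FTL bdev back into a regular file for later comparison.
    SPDK=/home/vagrant/spdk_repo/spdk
    # --ib: input bdev (ftl0), --of: output file, --json: bdev configuration
    # used to bring ftl0 up, --count: number of blocks to copy.
    "$SPDK/build/bin/spdk_dd" --ib=ftl0 \
        --of="$SPDK/test/ftl/testfile" \
        --json="$SPDK/test/ftl/config/ftl.json" \
        --count=262144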
00:22:36.238 [2024-11-20 18:31:54.710607] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78103 ] 00:22:36.499 [2024-11-20 18:31:54.870459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.499 [2024-11-20 18:31:54.987497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.759 [2024-11-20 18:31:55.275663] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:36.759 [2024-11-20 18:31:55.275748] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:37.022 [2024-11-20 18:31:55.436994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.022 [2024-11-20 18:31:55.437058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:37.022 [2024-11-20 18:31:55.437087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:37.022 [2024-11-20 18:31:55.437115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.022 [2024-11-20 18:31:55.437174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.022 [2024-11-20 18:31:55.437186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:37.022 [2024-11-20 18:31:55.437198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:22:37.022 [2024-11-20 18:31:55.437206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.022 [2024-11-20 18:31:55.437228] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:37.022 [2024-11-20 18:31:55.437937] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:37.022 [2024-11-20 18:31:55.437964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.022 [2024-11-20 18:31:55.437972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:37.022 [2024-11-20 18:31:55.437982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.742 ms 00:22:37.022 [2024-11-20 18:31:55.437990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.022 [2024-11-20 18:31:55.439676] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:37.022 [2024-11-20 18:31:55.454014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.022 [2024-11-20 18:31:55.454061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:37.022 [2024-11-20 18:31:55.454075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.340 ms 00:22:37.022 [2024-11-20 18:31:55.454083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.022 [2024-11-20 18:31:55.454184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.022 [2024-11-20 18:31:55.454195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:37.022 [2024-11-20 18:31:55.454205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:22:37.022 [2024-11-20 18:31:55.454213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.022 [2024-11-20 18:31:55.462494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:37.022 [2024-11-20 18:31:55.462540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:37.022 [2024-11-20 18:31:55.462551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.201 ms 00:22:37.022 [2024-11-20 18:31:55.462559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.022 [2024-11-20 18:31:55.462646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.022 [2024-11-20 18:31:55.462655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:37.022 [2024-11-20 18:31:55.462665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:22:37.022 [2024-11-20 18:31:55.462673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.022 [2024-11-20 18:31:55.462718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.022 [2024-11-20 18:31:55.462729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:37.022 [2024-11-20 18:31:55.462738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:37.022 [2024-11-20 18:31:55.462745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.022 [2024-11-20 18:31:55.462768] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:37.022 [2024-11-20 18:31:55.466919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.022 [2024-11-20 18:31:55.466959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:37.022 [2024-11-20 18:31:55.466971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.156 ms 00:22:37.022 [2024-11-20 18:31:55.466982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.022 [2024-11-20 18:31:55.467018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.022 [2024-11-20 18:31:55.467026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:37.022 [2024-11-20 18:31:55.467035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:37.022 [2024-11-20 18:31:55.467042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.022 [2024-11-20 18:31:55.467108] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:37.022 [2024-11-20 18:31:55.467133] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:37.022 [2024-11-20 18:31:55.467171] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:37.022 [2024-11-20 18:31:55.467191] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:37.022 [2024-11-20 18:31:55.467298] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:37.022 [2024-11-20 18:31:55.467309] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:37.022 [2024-11-20 18:31:55.467321] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:37.022 [2024-11-20 18:31:55.467332] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:37.022 [2024-11-20 18:31:55.467342] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:37.022 [2024-11-20 18:31:55.467351] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:37.022 [2024-11-20 18:31:55.467359] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:37.022 [2024-11-20 18:31:55.467367] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:37.022 [2024-11-20 18:31:55.467375] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:37.022 [2024-11-20 18:31:55.467385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.022 [2024-11-20 18:31:55.467393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:37.022 [2024-11-20 18:31:55.467401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:22:37.022 [2024-11-20 18:31:55.467408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.022 [2024-11-20 18:31:55.467490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.022 [2024-11-20 18:31:55.467499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:37.022 [2024-11-20 18:31:55.467508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:37.022 [2024-11-20 18:31:55.467515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.022 [2024-11-20 18:31:55.467621] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:37.022 [2024-11-20 18:31:55.467643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:37.022 [2024-11-20 18:31:55.467652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:37.022 [2024-11-20 18:31:55.467661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:37.022 [2024-11-20 18:31:55.467669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:37.022 [2024-11-20 18:31:55.467676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:37.022 [2024-11-20 18:31:55.467684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:37.022 [2024-11-20 18:31:55.467691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:37.022 [2024-11-20 18:31:55.467699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:37.022 [2024-11-20 18:31:55.467706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:37.022 [2024-11-20 18:31:55.467712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:37.022 [2024-11-20 18:31:55.467719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:37.022 [2024-11-20 18:31:55.467726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:37.022 [2024-11-20 18:31:55.467734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:37.022 [2024-11-20 18:31:55.467743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:37.022 [2024-11-20 18:31:55.467757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:37.022 [2024-11-20 18:31:55.467763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:37.022 [2024-11-20 18:31:55.467770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:37.022 [2024-11-20 18:31:55.467776] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:37.022 [2024-11-20 18:31:55.467783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:37.022 [2024-11-20 18:31:55.467791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:37.022 [2024-11-20 18:31:55.467797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:37.022 [2024-11-20 18:31:55.467804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:37.022 [2024-11-20 18:31:55.467810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:37.022 [2024-11-20 18:31:55.467816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:37.022 [2024-11-20 18:31:55.467823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:37.022 [2024-11-20 18:31:55.467829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:37.022 [2024-11-20 18:31:55.467836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:37.022 [2024-11-20 18:31:55.467843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:37.022 [2024-11-20 18:31:55.467850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:37.022 [2024-11-20 18:31:55.467857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:37.022 [2024-11-20 18:31:55.467864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:37.022 [2024-11-20 18:31:55.467871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:37.022 [2024-11-20 18:31:55.467877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:37.022 [2024-11-20 18:31:55.467883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:37.022 [2024-11-20 18:31:55.467890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:37.022 [2024-11-20 18:31:55.467897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:37.022 [2024-11-20 18:31:55.467905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:37.023 [2024-11-20 18:31:55.467912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:37.023 [2024-11-20 18:31:55.467919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:37.023 [2024-11-20 18:31:55.467925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:37.023 [2024-11-20 18:31:55.467932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:37.023 [2024-11-20 18:31:55.467939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:37.023 [2024-11-20 18:31:55.467945] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:37.023 [2024-11-20 18:31:55.467953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:37.023 [2024-11-20 18:31:55.467961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:37.023 [2024-11-20 18:31:55.467970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:37.023 [2024-11-20 18:31:55.467978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:37.023 [2024-11-20 18:31:55.467986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:37.023 [2024-11-20 18:31:55.467992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:37.023 
[2024-11-20 18:31:55.467999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:37.023 [2024-11-20 18:31:55.468006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:37.023 [2024-11-20 18:31:55.468014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:37.023 [2024-11-20 18:31:55.468022] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:37.023 [2024-11-20 18:31:55.468031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:37.023 [2024-11-20 18:31:55.468041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:37.023 [2024-11-20 18:31:55.468048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:37.023 [2024-11-20 18:31:55.468055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:37.023 [2024-11-20 18:31:55.468063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:37.023 [2024-11-20 18:31:55.468070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:37.023 [2024-11-20 18:31:55.468078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:37.023 [2024-11-20 18:31:55.468085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:37.023 [2024-11-20 18:31:55.468109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:37.023 [2024-11-20 18:31:55.468117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:37.023 [2024-11-20 18:31:55.468125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:37.023 [2024-11-20 18:31:55.468132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:37.023 [2024-11-20 18:31:55.468140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:37.023 [2024-11-20 18:31:55.468148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:37.023 [2024-11-20 18:31:55.468156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:37.023 [2024-11-20 18:31:55.468164] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:37.023 [2024-11-20 18:31:55.468176] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:37.023 [2024-11-20 18:31:55.468185] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:37.023 [2024-11-20 18:31:55.468193] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:37.023 [2024-11-20 18:31:55.468200] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:37.023 [2024-11-20 18:31:55.468209] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:37.023 [2024-11-20 18:31:55.468217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.023 [2024-11-20 18:31:55.468225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:37.023 [2024-11-20 18:31:55.468233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.665 ms 00:22:37.023 [2024-11-20 18:31:55.468244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.023 [2024-11-20 18:31:55.500245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.023 [2024-11-20 18:31:55.500296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:37.023 [2024-11-20 18:31:55.500308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.956 ms 00:22:37.023 [2024-11-20 18:31:55.500317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.023 [2024-11-20 18:31:55.500412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.023 [2024-11-20 18:31:55.500421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:37.023 [2024-11-20 18:31:55.500430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:22:37.023 [2024-11-20 18:31:55.500438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.023 [2024-11-20 18:31:55.546212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.023 [2024-11-20 18:31:55.546267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:37.023 [2024-11-20 18:31:55.546281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.714 ms 00:22:37.023 [2024-11-20 18:31:55.546290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.023 [2024-11-20 18:31:55.546341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.023 [2024-11-20 18:31:55.546352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:37.023 [2024-11-20 18:31:55.546362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:37.023 [2024-11-20 18:31:55.546374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.023 [2024-11-20 18:31:55.546988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.023 [2024-11-20 18:31:55.547034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:37.023 [2024-11-20 18:31:55.547046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:22:37.023 [2024-11-20 18:31:55.547054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.023 [2024-11-20 18:31:55.547234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.023 [2024-11-20 18:31:55.547247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:37.023 [2024-11-20 18:31:55.547256] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:22:37.023 [2024-11-20 18:31:55.547271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.023 [2024-11-20 18:31:55.563276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.023 [2024-11-20 18:31:55.563323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:37.023 [2024-11-20 18:31:55.563337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.984 ms 00:22:37.023 [2024-11-20 18:31:55.563346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.023 [2024-11-20 18:31:55.577597] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:37.023 [2024-11-20 18:31:55.577645] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:37.023 [2024-11-20 18:31:55.577659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.023 [2024-11-20 18:31:55.577668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:37.023 [2024-11-20 18:31:55.577679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.201 ms 00:22:37.023 [2024-11-20 18:31:55.577686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.023 [2024-11-20 18:31:55.603950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.023 [2024-11-20 18:31:55.604010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:37.023 [2024-11-20 18:31:55.604023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.208 ms 00:22:37.023 [2024-11-20 18:31:55.604031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.023 [2024-11-20 18:31:55.617146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.023 [2024-11-20 18:31:55.617192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:37.023 [2024-11-20 18:31:55.617203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.052 ms 00:22:37.023 [2024-11-20 18:31:55.617211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.023 [2024-11-20 18:31:55.630013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.023 [2024-11-20 18:31:55.630073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:37.023 [2024-11-20 18:31:55.630086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.753 ms 00:22:37.023 [2024-11-20 18:31:55.630102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.023 [2024-11-20 18:31:55.630748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.023 [2024-11-20 18:31:55.630781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:37.023 [2024-11-20 18:31:55.630791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:22:37.023 [2024-11-20 18:31:55.630802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.285 [2024-11-20 18:31:55.696402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.285 [2024-11-20 18:31:55.696470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:37.285 [2024-11-20 18:31:55.696494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 65.580 ms 00:22:37.285 [2024-11-20 18:31:55.696504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.285 [2024-11-20 18:31:55.707591] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:37.285 [2024-11-20 18:31:55.710616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.285 [2024-11-20 18:31:55.710666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:37.285 [2024-11-20 18:31:55.710679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.050 ms 00:22:37.285 [2024-11-20 18:31:55.710687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.285 [2024-11-20 18:31:55.710773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.285 [2024-11-20 18:31:55.710785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:37.285 [2024-11-20 18:31:55.710795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:22:37.285 [2024-11-20 18:31:55.710806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.285 [2024-11-20 18:31:55.710877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.285 [2024-11-20 18:31:55.710888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:37.285 [2024-11-20 18:31:55.710898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:22:37.285 [2024-11-20 18:31:55.710907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.285 [2024-11-20 18:31:55.710927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.285 [2024-11-20 18:31:55.710937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:37.285 [2024-11-20 18:31:55.710945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:37.285 [2024-11-20 18:31:55.710953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.285 [2024-11-20 18:31:55.710990] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:37.285 [2024-11-20 18:31:55.711005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.285 [2024-11-20 18:31:55.711013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:37.285 [2024-11-20 18:31:55.711021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:37.285 [2024-11-20 18:31:55.711030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.285 [2024-11-20 18:31:55.737319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.285 [2024-11-20 18:31:55.737370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:37.285 [2024-11-20 18:31:55.737384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.269 ms 00:22:37.285 [2024-11-20 18:31:55.737399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.285 [2024-11-20 18:31:55.737492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.285 [2024-11-20 18:31:55.737502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:37.285 [2024-11-20 18:31:55.737512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:22:37.285 [2024-11-20 18:31:55.737520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
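(Cross-check, derived from the layout dump above rather than stated in it: the l2p region size follows directly from the reported parameters, since 20971520 L2P entries × 4-byte addresses = 83,886,080 B = 80.00 MiB, exactly the "Region l2p" block count reported by dump_region; the L2P cache itself is then capped at the 9-of-10 MiB resident size noted just above.)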
00:22:37.285 [2024-11-20 18:31:55.739166] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 301.644 ms, result 0 00:22:38.715  [2024-11-20T18:31:58.284Z] Copying: 14/1024 [MB] (14 MBps) [2024-11-20T18:31:59.224Z] Copying: 31/1024 [MB] (17 MBps) [2024-11-20T18:32:00.163Z] Copying: 48/1024 [MB] (16 MBps) [2024-11-20T18:32:01.106Z] Copying: 58/1024 [MB] (10 MBps) [2024-11-20T18:32:02.047Z] Copying: 69/1024 [MB] (10 MBps) [2024-11-20T18:32:02.991Z] Copying: 79/1024 [MB] (10 MBps) [2024-11-20T18:32:03.933Z] Copying: 90/1024 [MB] (10 MBps) [2024-11-20T18:32:05.318Z] Copying: 100/1024 [MB] (10 MBps) [2024-11-20T18:32:06.263Z] Copying: 111/1024 [MB] (10 MBps) [2024-11-20T18:32:07.207Z] Copying: 127/1024 [MB] (16 MBps) [2024-11-20T18:32:08.153Z] Copying: 137/1024 [MB] (10 MBps) [2024-11-20T18:32:09.097Z] Copying: 148/1024 [MB] (10 MBps) [2024-11-20T18:32:10.044Z] Copying: 158/1024 [MB] (10 MBps) [2024-11-20T18:32:10.988Z] Copying: 169/1024 [MB] (10 MBps) [2024-11-20T18:32:11.932Z] Copying: 190/1024 [MB] (21 MBps) [2024-11-20T18:32:13.319Z] Copying: 201/1024 [MB] (10 MBps) [2024-11-20T18:32:14.264Z] Copying: 211/1024 [MB] (10 MBps) [2024-11-20T18:32:15.208Z] Copying: 222/1024 [MB] (10 MBps) [2024-11-20T18:32:16.146Z] Copying: 234/1024 [MB] (12 MBps) [2024-11-20T18:32:17.089Z] Copying: 251/1024 [MB] (17 MBps) [2024-11-20T18:32:18.044Z] Copying: 270/1024 [MB] (18 MBps) [2024-11-20T18:32:18.989Z] Copying: 289/1024 [MB] (19 MBps) [2024-11-20T18:32:19.927Z] Copying: 310/1024 [MB] (21 MBps) [2024-11-20T18:32:21.329Z] Copying: 335/1024 [MB] (25 MBps) [2024-11-20T18:32:22.275Z] Copying: 355/1024 [MB] (19 MBps) [2024-11-20T18:32:23.219Z] Copying: 366/1024 [MB] (10 MBps) [2024-11-20T18:32:24.161Z] Copying: 387/1024 [MB] (21 MBps) [2024-11-20T18:32:25.104Z] Copying: 412/1024 [MB] (24 MBps) [2024-11-20T18:32:26.077Z] Copying: 428/1024 [MB] (16 MBps) [2024-11-20T18:32:27.021Z] Copying: 453/1024 [MB] (25 MBps) [2024-11-20T18:32:27.965Z] Copying: 466/1024 [MB] (12 MBps) [2024-11-20T18:32:29.357Z] Copying: 485/1024 [MB] (19 MBps) [2024-11-20T18:32:29.927Z] Copying: 502/1024 [MB] (16 MBps) [2024-11-20T18:32:31.310Z] Copying: 521/1024 [MB] (18 MBps) [2024-11-20T18:32:32.254Z] Copying: 541/1024 [MB] (19 MBps) [2024-11-20T18:32:33.199Z] Copying: 558/1024 [MB] (16 MBps) [2024-11-20T18:32:34.144Z] Copying: 576/1024 [MB] (18 MBps) [2024-11-20T18:32:35.088Z] Copying: 595/1024 [MB] (19 MBps) [2024-11-20T18:32:36.033Z] Copying: 615/1024 [MB] (20 MBps) [2024-11-20T18:32:36.978Z] Copying: 626/1024 [MB] (10 MBps) [2024-11-20T18:32:38.365Z] Copying: 636/1024 [MB] (10 MBps) [2024-11-20T18:32:38.937Z] Copying: 657/1024 [MB] (21 MBps) [2024-11-20T18:32:40.326Z] Copying: 674/1024 [MB] (16 MBps) [2024-11-20T18:32:41.270Z] Copying: 685/1024 [MB] (10 MBps) [2024-11-20T18:32:42.213Z] Copying: 705/1024 [MB] (20 MBps) [2024-11-20T18:32:43.152Z] Copying: 719/1024 [MB] (13 MBps) [2024-11-20T18:32:44.092Z] Copying: 738/1024 [MB] (18 MBps) [2024-11-20T18:32:45.034Z] Copying: 757/1024 [MB] (19 MBps) [2024-11-20T18:32:45.977Z] Copying: 779/1024 [MB] (21 MBps) [2024-11-20T18:32:47.365Z] Copying: 798/1024 [MB] (19 MBps) [2024-11-20T18:32:47.957Z] Copying: 808/1024 [MB] (10 MBps) [2024-11-20T18:32:49.346Z] Copying: 819/1024 [MB] (10 MBps) [2024-11-20T18:32:50.290Z] Copying: 829/1024 [MB] (10 MBps) [2024-11-20T18:32:51.232Z] Copying: 840/1024 [MB] (10 MBps) [2024-11-20T18:32:52.179Z] Copying: 851/1024 [MB] (10 MBps) [2024-11-20T18:32:53.124Z] Copying: 862/1024 [MB] (10 MBps) 
[2024-11-20T18:32:54.071Z] Copying: 872/1024 [MB] (10 MBps) [2024-11-20T18:32:55.087Z] Copying: 883/1024 [MB] (10 MBps) [2024-11-20T18:32:56.030Z] Copying: 897/1024 [MB] (13 MBps) [2024-11-20T18:32:56.971Z] Copying: 915/1024 [MB] (18 MBps) [2024-11-20T18:32:58.355Z] Copying: 926/1024 [MB] (10 MBps) [2024-11-20T18:32:58.928Z] Copying: 937/1024 [MB] (10 MBps) [2024-11-20T18:33:00.312Z] Copying: 947/1024 [MB] (10 MBps) [2024-11-20T18:33:01.256Z] Copying: 961/1024 [MB] (13 MBps) [2024-11-20T18:33:02.200Z] Copying: 977/1024 [MB] (16 MBps) [2024-11-20T18:33:03.143Z] Copying: 998/1024 [MB] (21 MBps) [2024-11-20T18:33:04.086Z] Copying: 1010/1024 [MB] (11 MBps) [2024-11-20T18:33:04.349Z] Copying: 1021/1024 [MB] (10 MBps) [2024-11-20T18:33:04.349Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-20 18:33:04.169211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.720 [2024-11-20 18:33:04.169457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:45.720 [2024-11-20 18:33:04.169483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:45.720 [2024-11-20 18:33:04.169493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.720 [2024-11-20 18:33:04.169527] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:45.720 [2024-11-20 18:33:04.172926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.720 [2024-11-20 18:33:04.172985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:45.720 [2024-11-20 18:33:04.173007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.380 ms 00:23:45.720 [2024-11-20 18:33:04.173015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.720 [2024-11-20 18:33:04.173294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.720 [2024-11-20 18:33:04.173309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:45.720 [2024-11-20 18:33:04.173320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:23:45.720 [2024-11-20 18:33:04.173328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.720 [2024-11-20 18:33:04.176782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.720 [2024-11-20 18:33:04.176945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:45.720 [2024-11-20 18:33:04.176963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.440 ms 00:23:45.720 [2024-11-20 18:33:04.176971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.720 [2024-11-20 18:33:04.183548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.720 [2024-11-20 18:33:04.183590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:45.720 [2024-11-20 18:33:04.183601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.547 ms 00:23:45.720 [2024-11-20 18:33:04.183610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.720 [2024-11-20 18:33:04.211423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.720 [2024-11-20 18:33:04.211610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:45.720 [2024-11-20 18:33:04.211687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.751 ms 00:23:45.720 [2024-11-20 
18:33:04.211712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.720 [2024-11-20 18:33:04.228186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.720 [2024-11-20 18:33:04.228369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:45.720 [2024-11-20 18:33:04.228394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.420 ms 00:23:45.720 [2024-11-20 18:33:04.228403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.720 [2024-11-20 18:33:04.228582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.720 [2024-11-20 18:33:04.228602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:45.720 [2024-11-20 18:33:04.228612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:23:45.720 [2024-11-20 18:33:04.228620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.720 [2024-11-20 18:33:04.255508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.720 [2024-11-20 18:33:04.255694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:45.720 [2024-11-20 18:33:04.255716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.871 ms 00:23:45.720 [2024-11-20 18:33:04.255725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.720 [2024-11-20 18:33:04.281845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.720 [2024-11-20 18:33:04.281904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:45.720 [2024-11-20 18:33:04.281916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.035 ms 00:23:45.720 [2024-11-20 18:33:04.281923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.720 [2024-11-20 18:33:04.307792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.720 [2024-11-20 18:33:04.307835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:45.720 [2024-11-20 18:33:04.307848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.819 ms 00:23:45.720 [2024-11-20 18:33:04.307856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.720 [2024-11-20 18:33:04.333213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.720 [2024-11-20 18:33:04.333413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:45.720 [2024-11-20 18:33:04.333435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.265 ms 00:23:45.720 [2024-11-20 18:33:04.333443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.720 [2024-11-20 18:33:04.333480] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:45.720 [2024-11-20 18:33:04.333496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:45.720 [2024-11-20 18:33:04.333516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:45.720 [2024-11-20 18:33:04.333525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:45.720 [2024-11-20 18:33:04.333533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:45.720 [2024-11-20 18:33:04.333542] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:45.720 [2024-11-20 18:33:04.333550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:45.720 [2024-11-20 18:33:04.333558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:45.720 [2024-11-20 18:33:04.333566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:45.720 [2024-11-20 18:33:04.333574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:45.720 [2024-11-20 18:33:04.333582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:45.720 [2024-11-20 18:33:04.333590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:45.720 [2024-11-20 18:33:04.333597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:45.720 [2024-11-20 18:33:04.333606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:45.720 [2024-11-20 18:33:04.333614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:45.720 [2024-11-20 18:33:04.333621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:45.720 [2024-11-20 18:33:04.333630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:45.720 [2024-11-20 18:33:04.333637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:45.720 [2024-11-20 18:33:04.333644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:45.720 [2024-11-20 18:33:04.333653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:45.720 [2024-11-20 18:33:04.333660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:45.720 [2024-11-20 18:33:04.333668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:45.720 [2024-11-20 18:33:04.333675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:45.720 [2024-11-20 18:33:04.333683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:45.720 [2024-11-20 18:33:04.333690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:45.721 [2024-11-20 18:33:04.333698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:45.721 [2024-11-20 18:33:04.333705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:45.721 [2024-11-20 18:33:04.333714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:45.721 [2024-11-20 18:33:04.333722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:45.721 [2024-11-20 18:33:04.333729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:45.721 [2024-11-20 
18:33:04.333738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:45.721 [2024-11-20 18:33:04.333745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:45.721 [2024-11-20 18:33:04.333753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:45.721 [2024-11-20 18:33:04.333761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:45.721 [2024-11-20 18:33:04.333768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:45.721 [2024-11-20 18:33:04.333776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:45.721 [2024-11-20 18:33:04.333784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:45.721 [2024-11-20 18:33:04.333792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:45.721 [2024-11-20 18:33:04.333799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:45.721 [2024-11-20 18:33:04.333807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:45.721 [2024-11-20 18:33:04.333815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:45.721 [2024-11-20 18:33:04.333823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:45.721 [2024-11-20 18:33:04.333831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:45.721 [2024-11-20 18:33:04.333841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:45.721 [2024-11-20 18:33:04.333849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:45.721 [2024-11-20 18:33:04.333857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:45.721 [2024-11-20 18:33:04.333866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:45.721 [2024-11-20 18:33:04.333873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:45.721 [2024-11-20 18:33:04.333881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:45.721 [2024-11-20 18:33:04.333889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:45.721 [2024-11-20 18:33:04.333897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:45.721 [2024-11-20 18:33:04.333905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:45.721 [2024-11-20 18:33:04.333913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:45.721 [2024-11-20 18:33:04.333920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:45.721 [2024-11-20 18:33:04.333928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
00:23:45.721 [2024-11-20 18:33:04.333936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55-100: 0 / 261120 wr_cnt: 0 state: free (46 identical per-band lines condensed)
00:23:45.721 [2024-11-20 18:33:04.334332] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:23:45.721 [2024-11-20 18:33:04.334344] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d1c29b40-f180-4470-a535-460b563a5775
00:23:45.721 [2024-11-20 18:33:04.334353] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:23:45.721 [2024-11-20 18:33:04.334361] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:23:45.721 [2024-11-20 18:33:04.334368] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:23:45.721 [2024-11-20 18:33:04.334376] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:23:45.721 [2024-11-20 18:33:04.334384] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: crit: 0, high: 0, low: 0, start: 0
00:23:45.721 [2024-11-20 18:33:04.334437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics, duration: 0.958 ms, status: 0
00:23:45.983 [2024-11-20 18:33:04.348089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P, duration: 13.571 ms, status: 0
00:23:45.983 [2024-11-20 18:33:04.348571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing, duration: 0.383 ms, status: 0
00:23:45.983 [2024-11-20 18:33:04.385390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize reloc, duration: 0.000 ms, status: 0
00:23:45.983 [2024-11-20 18:33:04.385539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands metadata, duration: 0.000 ms, status: 0
00:23:45.983 [2024-11-20 18:33:04.385646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize trim map, duration: 0.000 ms, status: 0
00:23:45.983 [2024-11-20 18:33:04.385694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize valid map, duration: 0.000 ms, status: 0
00:23:45.983 [2024-11-20 18:33:04.470922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize NV cache, duration: 0.000 ms, status: 0
00:23:45.983 [2024-11-20 18:33:04.540835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize metadata, duration: 0.000 ms, status: 0
00:23:45.983 [2024-11-20 18:33:04.541006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize core IO channel, duration: 0.000 ms, status: 0
00:23:45.983 [2024-11-20 18:33:04.541073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands, duration: 0.000 ms, status: 0
00:23:45.983 [2024-11-20 18:33:04.541235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize memory pools, duration: 0.000 ms, status: 0
00:23:45.983 [2024-11-20 18:33:04.541313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize superblock, duration: 0.000 ms, status: 0
00:23:45.983 [2024-11-20 18:33:04.541382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open cache bdev, duration: 0.000 ms, status: 0
00:23:45.983 [2024-11-20 18:33:04.541462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open base bdev, duration: 0.000 ms, status: 0
00:23:45.983 [2024-11-20 18:33:04.541626] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 372.374 ms, result 0
00:23:46.927 18:33:05 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:23:49.468 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:23:49.468 18:33:07 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072
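
(Note, not part of the log output: the md5sum -c call above is the heart of the restore test. restore.sh evidently records a checksum of testfile before the FTL device is shut down, then verifies the data after the device comes back up; spdk_dd is then invoked with --seek=131072, presumably so the next write lands past the blocks just verified. A minimal sketch of the same record-and-verify pattern, with only the testfile path taken from the log and everything else generic shell:

    md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile > testfile.md5   # record the checksum before shutdown
    # ... FTL shutdown and restore happen in between ...
    md5sum -c testfile.md5   # prints 'testfile: OK' only on a byte-exact match
)
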
00:23:49.468 [2024-11-20 18:33:07.591562] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization...
00:23:49.468 [2024-11-20 18:33:07.591698] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78847 ]
00:23:49.468 [2024-11-20 18:33:07.750397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:49.468 [2024-11-20 18:33:07.877176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:49.729 [2024-11-20 18:33:08.166207] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:23:49.729 [2024-11-20 18:33:08.166283] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:23:49.729 [2024-11-20 18:33:08.327499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Check configuration, duration: 0.004 ms, status: 0
00:23:49.729 [2024-11-20 18:33:08.327647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Open base bdev, duration: 0.038 ms, status: 0
00:23:49.729 [2024-11-20 18:33:08.327699] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:23:49.729 [2024-11-20 18:33:08.328461] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:23:49.729 [2024-11-20 18:33:08.328481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Open cache bdev, duration: 0.787 ms, status: 0
00:23:49.729 [2024-11-20 18:33:08.330274] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:23:49.729 [2024-11-20 18:33:08.344653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Load super block, duration: 14.380 ms, status: 0
00:23:49.729 [2024-11-20 18:33:08.345057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Validate super block, duration: 0.029 ms, status: 0
00:23:49.729 [2024-11-20 18:33:08.353600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize memory pools, duration: 8.382 ms, status: 0
00:23:49.729 [2024-11-20 18:33:08.353747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands, duration: 0.060 ms, status: 0
00:23:49.729 [2024-11-20 18:33:08.353825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Register IO device, duration: 0.013 ms, status: 0
00:23:49.729 [2024-11-20 18:33:08.353876] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:23:49.992 [2024-11-20 18:33:08.358184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize core IO channel, duration: 4.313 ms, status: 0
00:23:49.992 [2024-11-20 18:33:08.358280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Decorate bands, duration: 0.014 ms, status: 0
00:23:49.992 [2024-11-20 18:33:08.358357] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:23:49.992 [2024-11-20 18:33:08.358380] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes, base layout blob load 0x48 bytes, layout blob load 0x190 bytes
00:23:49.992 [2024-11-20 18:33:08.358542] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes, base layout blob store 0x48 bytes, layout blob store 0x190 bytes
00:23:49.992 [2024-11-20 18:33:08.358575] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:23:49.992 [2024-11-20 18:33:08.358585] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:23:49.992 [2024-11-20 18:33:08.358593] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:23:49.992 [2024-11-20 18:33:08.358601] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:23:49.992 [2024-11-20 18:33:08.358609] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:23:49.992 [2024-11-20 18:33:08.358616] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:23:49.992 [2024-11-20 18:33:08.358629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize layout, duration: 0.274 ms, status: 0
00:23:49.992 [2024-11-20 18:33:08.358738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Verify layout, duration: 0.070 ms, status: 0
00:23:49.992 [2024-11-20 18:33:08.358868] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:23:49.992 [2024-11-20 18:33:08.358882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb: offset 0.00 MiB, blocks 0.12 MiB
00:23:49.992 [2024-11-20 18:33:08.358908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p: offset 0.12 MiB, blocks 80.00 MiB
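
(Note, not part of the log output: the layout numbers are internally consistent. With a 4-byte address per L2P entry, the 20971520-entry mapping table needs 20971520 * 4 bytes, which is exactly the 80 MiB reserved for the l2p region above. A one-line check in plain shell arithmetic, nothing SPDK-specific:

    echo $(( 20971520 * 4 / 1024 / 1024 ))   # L2P table size in MiB -> 80
)
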
00:23:49.992 [2024-11-20 18:33:08.358929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md: offset 80.12 MiB, blocks 0.50 MiB
00:23:49.992 [2024-11-20 18:33:08.358950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror: offset 80.62 MiB, blocks 0.50 MiB
00:23:49.992 [2024-11-20 18:33:08.358973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md: offset 113.88 MiB, blocks 0.12 MiB
00:23:49.992 [2024-11-20 18:33:08.359002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror: offset 114.00 MiB, blocks 0.12 MiB
00:23:49.992 [2024-11-20 18:33:08.359022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0: offset 81.12 MiB, blocks 8.00 MiB
00:23:49.992 [2024-11-20 18:33:08.359043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1: offset 89.12 MiB, blocks 8.00 MiB
00:23:49.992 [2024-11-20 18:33:08.359064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2: offset 97.12 MiB, blocks 8.00 MiB
00:23:49.992 [2024-11-20 18:33:08.359083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3: offset 105.12 MiB, blocks 8.00 MiB
00:23:49.992 [2024-11-20 18:33:08.359129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md: offset 113.12 MiB, blocks 0.25 MiB
00:23:49.992 [2024-11-20 18:33:08.359149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror: offset 113.38 MiB, blocks 0.25 MiB
00:23:49.992 [2024-11-20 18:33:08.359169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log: offset 113.62 MiB, blocks 0.12 MiB
00:23:49.992 [2024-11-20 18:33:08.359188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror: offset 113.75 MiB, blocks 0.12 MiB
00:23:49.992 [2024-11-20 18:33:08.359210] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:23:49.993 [2024-11-20 18:33:08.359218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror: offset 0.00 MiB, blocks 0.12 MiB
00:23:49.993 [2024-11-20 18:33:08.359244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap: offset 102400.25 MiB, blocks 3.38 MiB
00:23:49.993 [2024-11-20 18:33:08.359266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm: offset 0.25 MiB, blocks 102400.00 MiB
00:23:49.993 [2024-11-20 18:33:08.359289] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:23:49.993 [2024-11-20 18:33:08.359299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:23:49.993 [2024-11-20 18:33:08.359307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:23:49.993 [2024-11-20 18:33:08.359314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:23:49.993 [2024-11-20 18:33:08.359321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:23:49.993 [2024-11-20 18:33:08.359328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:23:49.993 [2024-11-20 18:33:08.359336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:23:49.993 [2024-11-20 18:33:08.359343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:23:49.993 [2024-11-20 18:33:08.359350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:23:49.993 [2024-11-20 18:33:08.359357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:23:49.993 [2024-11-20 18:33:08.359364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:23:49.993 [2024-11-20 18:33:08.359371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:23:49.993 [2024-11-20 18:33:08.359378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:23:49.993 [2024-11-20 18:33:08.359385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:23:49.993 [2024-11-20 18:33:08.359391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:23:49.993 [2024-11-20 18:33:08.359399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:23:49.993 [2024-11-20 18:33:08.359406] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:23:49.993 [2024-11-20 18:33:08.359417] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:23:49.993 [2024-11-20 18:33:08.359426] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:23:49.993 [2024-11-20 18:33:08.359433] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:23:49.993 [2024-11-20 18:33:08.359440] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:23:49.993 [2024-11-20 18:33:08.359447] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:23:49.993 [2024-11-20 18:33:08.359455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Layout upgrade, duration: 0.656 ms, status: 0
00:23:49.993 [2024-11-20 18:33:08.391721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize metadata, duration: 32.196 ms, status: 0
00:23:49.993 [2024-11-20 18:33:08.392109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize band addresses, duration: 0.077 ms, status: 0
00:23:49.993 [2024-11-20 18:33:08.439699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize NV cache, duration: 47.449 ms, status: 0
00:23:49.993 [2024-11-20 18:33:08.440048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize valid map, duration: 0.004 ms, status: 0
00:23:49.993 [2024-11-20 18:33:08.440726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize trim map, duration: 0.491 ms, status: 0
00:23:49.993 [2024-11-20 18:33:08.441308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands metadata, duration: 0.147 ms, status: 0
00:23:49.993 [2024-11-20 18:33:08.458410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize reloc, duration: 15.687 ms, status: 0
00:23:49.993 [2024-11-20 18:33:08.472808] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:23:49.993 [2024-11-20 18:33:08.472996] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:23:49.993 [2024-11-20 18:33:08.473017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore NV cache metadata, duration: 14.420 ms, status: 0
00:23:49.993 [2024-11-20 18:33:08.498877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore valid map metadata, duration: 25.680 ms, status: 0
00:23:49.993 [2024-11-20 18:33:08.511954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore band info metadata, duration: 12.947 ms, status: 0
00:23:49.993 [2024-11-20 18:33:08.524714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore trim metadata, duration: 12.649 ms, status: 0
00:23:49.993 [2024-11-20 18:33:08.525586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize P2L checkpointing, duration: 0.698 ms, status: 0
00:23:49.993 [2024-11-20 18:33:08.590742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore P2L checkpoints, duration: 65.087 ms, status: 0
00:23:49.993 [2024-11-20 18:33:08.602566] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:23:49.993 [2024-11-20 18:33:08.605589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize L2P, duration: 14.517 ms, status: 0
00:23:49.993 [2024-11-20 18:33:08.605869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore L2P, duration: 0.016 ms, status: 0
00:23:49.993 [2024-11-20 18:33:08.605976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize band initialization, duration: 0.037 ms, status: 0
00:23:49.993 [2024-11-20 18:33:08.606026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Start core poller, duration: 0.006 ms, status: 0
00:23:49.994 [2024-11-20 18:33:08.606086] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:23:49.994 [2024-11-20 18:33:08.606116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Self test on startup, duration: 0.030 ms, status: 0
00:23:50.254 [2024-11-20 18:33:08.632289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL dirty state, duration: 26.129 ms, status: 0
00:23:50.254 [2024-11-20 18:33:08.632582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize initialization, duration: 0.042 ms, status: 0
00:23:50.254 [2024-11-20 18:33:08.634343] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 306.313 ms, result 0
00:23:51.195 [2024-11-20T18:33:10.766Z] Copying: 14/1024 [MB] (14 MBps) ... [2024-11-20T18:34:11.747Z] Copying: 1024/1024 [MB] (average 16 MBps) (about 60 intermediate progress updates condensed; per-interval rates ranged from roughly 1.6 to 36 MBps)
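
(Note, not part of the log output: the condensed copy phase above moved 1024 MB between 18:33:10.766Z and 18:34:11.747Z, roughly 61 seconds, which squares with the reported average once the slow final flush, 1608 kBps for the last chunk, is counted:

    awk 'BEGIN { printf "%.1f MBps\n", 1024 / 61 }'   # -> 16.8, in line with the logged 'average 16 MBps'
)
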
00:24:53.118 [2024-11-20 18:34:11.701485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinit core IO channel, duration: 0.005 ms, status: 0
00:24:53.118 [2024-11-20 18:34:11.703914] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:24:53.118 [2024-11-20 18:34:11.710358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Unregister IO device, duration: 6.269 ms, status: 0
00:24:53.118 [2024-11-20 18:34:11.723278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Stop core poller, duration: 9.623 ms, status: 0
00:24:53.378 [2024-11-20 18:34:11.746533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist L2P, duration: 23.160 ms, status: 0
00:24:53.378 [2024-11-20 18:34:11.752967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Finish L2P trims, duration: 6.336 ms, status: 0
00:24:53.378 [2024-11-20 18:34:11.779722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist NV cache metadata, duration: 26.462 ms, status: 0
00:24:53.378 [2024-11-20 18:34:11.795646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist valid map metadata, duration: 15.807 ms, status: 0
00:24:53.378 [2024-11-20 18:34:12.000732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist P2L metadata, duration: 204.711 ms, status: 0
00:24:53.640 [2024-11-20 18:34:12.026750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist band info metadata, duration: 25.927 ms, status: 0
00:24:53.640 [2024-11-20 18:34:12.051923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist trim metadata, duration: 24.937 ms, status: 0
00:24:53.640 [2024-11-20 18:34:12.076547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist superblock, duration: 24.503 ms, status: 0
00:24:53.640 [2024-11-20 18:34:12.101417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL clean state, duration: 24.735 ms, status: 0
00:24:53.640 [2024-11-20 18:34:12.101533] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:24:53.640 [2024-11-20 18:34:12.101548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 107776 / 261120 wr_cnt: 1 state: open
00:24:53.640 [2024-11-20 18:34:12.101559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2-100: 0 / 261120 wr_cnt: 0 state: free (99 identical per-band lines condensed)
00:24:53.641 [2024-11-20 18:34:12.102383] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:24:53.641 [2024-11-20 18:34:12.102392] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d1c29b40-f180-4470-a535-460b563a5775
00:24:53.641 [2024-11-20 18:34:12.102401] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 107776
00:24:53.641 [2024-11-20 18:34:12.102409] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 108736
00:24:53.641 [2024-11-20 18:34:12.102417] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 107776
00:24:53.641 [2024-11-20 18:34:12.102426] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0089
00:24:53.641 [2024-11-20 18:34:12.102434] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: crit: 0, high: 0, low: 0, start: 0
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.642 [2024-11-20 18:34:12.153037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:53.642 [2024-11-20 18:34:12.153047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.642 [2024-11-20 18:34:12.153056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.642 [2024-11-20 18:34:12.153145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.642 [2024-11-20 18:34:12.153158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:53.642 [2024-11-20 18:34:12.153168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.642 [2024-11-20 18:34:12.153182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.642 [2024-11-20 18:34:12.153200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.642 [2024-11-20 18:34:12.153209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:53.642 [2024-11-20 18:34:12.153219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.642 [2024-11-20 18:34:12.153228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.642 [2024-11-20 18:34:12.236803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.642 [2024-11-20 18:34:12.236856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:53.642 [2024-11-20 18:34:12.236876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.642 [2024-11-20 18:34:12.236885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.903 [2024-11-20 18:34:12.305749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.903 [2024-11-20 18:34:12.305801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:53.903 [2024-11-20 18:34:12.305814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.903 [2024-11-20 18:34:12.305823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.903 [2024-11-20 18:34:12.305905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.903 [2024-11-20 18:34:12.305916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:53.903 [2024-11-20 18:34:12.305925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.903 [2024-11-20 18:34:12.305933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.903 [2024-11-20 18:34:12.305977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.903 [2024-11-20 18:34:12.305986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:53.903 [2024-11-20 18:34:12.305995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.903 [2024-11-20 18:34:12.306004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.903 [2024-11-20 18:34:12.306130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.903 [2024-11-20 18:34:12.306143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:53.903 [2024-11-20 18:34:12.306152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.903 [2024-11-20 18:34:12.306160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:24:53.903 [2024-11-20 18:34:12.306195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.903 [2024-11-20 18:34:12.306206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:53.903 [2024-11-20 18:34:12.306214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.903 [2024-11-20 18:34:12.306223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.903 [2024-11-20 18:34:12.306265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.903 [2024-11-20 18:34:12.306275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:53.903 [2024-11-20 18:34:12.306284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.903 [2024-11-20 18:34:12.306292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.903 [2024-11-20 18:34:12.306345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:53.903 [2024-11-20 18:34:12.306356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:53.903 [2024-11-20 18:34:12.306365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:53.903 [2024-11-20 18:34:12.306373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.903 [2024-11-20 18:34:12.306509] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 605.952 ms, result 0 00:24:55.291 00:24:55.291 00:24:55.291 18:34:13 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:24:55.291 [2024-11-20 18:34:13.768060] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
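The shutdown statistics dumped above by ftl_debug.c report total writes 108736 against user writes 107776, with WAF 1.0089. The WAF figure is consistent with being the plain ratio of those two counters; a minimal sanity check in Python (plain arithmetic on values copied from the log above, not an SPDK API call):

total_writes = 108736   # "total writes" from the stats dump above
user_writes  = 107776   # "user writes" from the stats dump above
waf = total_writes / user_writes
print(f"WAF: {waf:.4f}")   # -> WAF: 1.0089, matching the log

The second statistics dump later in this log checks out the same way: 24256 / 23296 rounds to the reported 1.0412.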
00:24:55.291 [2024-11-20 18:34:13.768227] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79520 ] 00:24:55.552 [2024-11-20 18:34:13.931008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.552 [2024-11-20 18:34:14.053326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:55.813 [2024-11-20 18:34:14.341296] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:55.813 [2024-11-20 18:34:14.341370] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:56.075 [2024-11-20 18:34:14.502640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.075 [2024-11-20 18:34:14.502858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:56.075 [2024-11-20 18:34:14.502891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:56.075 [2024-11-20 18:34:14.502901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.075 [2024-11-20 18:34:14.502968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.075 [2024-11-20 18:34:14.502979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:56.075 [2024-11-20 18:34:14.502991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:24:56.075 [2024-11-20 18:34:14.502999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.075 [2024-11-20 18:34:14.503022] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:56.075 [2024-11-20 18:34:14.503735] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:56.075 [2024-11-20 18:34:14.503756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.075 [2024-11-20 18:34:14.503764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:56.075 [2024-11-20 18:34:14.503774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.740 ms 00:24:56.075 [2024-11-20 18:34:14.503782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.075 [2024-11-20 18:34:14.505442] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:56.075 [2024-11-20 18:34:14.519719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.075 [2024-11-20 18:34:14.519765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:56.075 [2024-11-20 18:34:14.519780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.279 ms 00:24:56.075 [2024-11-20 18:34:14.519788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.075 [2024-11-20 18:34:14.519868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.075 [2024-11-20 18:34:14.519878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:56.075 [2024-11-20 18:34:14.519887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:24:56.075 [2024-11-20 18:34:14.519895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.075 [2024-11-20 18:34:14.527912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:56.075 [2024-11-20 18:34:14.527954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:56.075 [2024-11-20 18:34:14.527983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.942 ms 00:24:56.075 [2024-11-20 18:34:14.527991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.075 [2024-11-20 18:34:14.528075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.075 [2024-11-20 18:34:14.528085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:56.075 [2024-11-20 18:34:14.528115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:24:56.075 [2024-11-20 18:34:14.528124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.075 [2024-11-20 18:34:14.528168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.075 [2024-11-20 18:34:14.528179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:56.075 [2024-11-20 18:34:14.528188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:56.075 [2024-11-20 18:34:14.528196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.075 [2024-11-20 18:34:14.528219] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:56.075 [2024-11-20 18:34:14.532251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.075 [2024-11-20 18:34:14.532288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:56.075 [2024-11-20 18:34:14.532299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.037 ms 00:24:56.075 [2024-11-20 18:34:14.532311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.075 [2024-11-20 18:34:14.532348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.075 [2024-11-20 18:34:14.532357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:56.075 [2024-11-20 18:34:14.532366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:56.075 [2024-11-20 18:34:14.532373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.075 [2024-11-20 18:34:14.532424] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:56.075 [2024-11-20 18:34:14.532447] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:56.075 [2024-11-20 18:34:14.532485] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:56.075 [2024-11-20 18:34:14.532504] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:56.076 [2024-11-20 18:34:14.532609] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:56.076 [2024-11-20 18:34:14.532620] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:56.076 [2024-11-20 18:34:14.532631] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:56.076 [2024-11-20 18:34:14.532643] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:56.076 [2024-11-20 18:34:14.532652] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:56.076 [2024-11-20 18:34:14.532661] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:56.076 [2024-11-20 18:34:14.532669] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:56.076 [2024-11-20 18:34:14.532677] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:56.076 [2024-11-20 18:34:14.532685] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:56.076 [2024-11-20 18:34:14.532696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.076 [2024-11-20 18:34:14.532704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:56.076 [2024-11-20 18:34:14.532712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:24:56.076 [2024-11-20 18:34:14.532719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.076 [2024-11-20 18:34:14.532802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.076 [2024-11-20 18:34:14.532811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:56.076 [2024-11-20 18:34:14.532819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:56.076 [2024-11-20 18:34:14.532826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.076 [2024-11-20 18:34:14.532931] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:56.076 [2024-11-20 18:34:14.532945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:56.076 [2024-11-20 18:34:14.532954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:56.076 [2024-11-20 18:34:14.532961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:56.076 [2024-11-20 18:34:14.532969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:56.076 [2024-11-20 18:34:14.532977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:56.076 [2024-11-20 18:34:14.532984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:56.076 [2024-11-20 18:34:14.532992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:56.076 [2024-11-20 18:34:14.533000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:56.076 [2024-11-20 18:34:14.533007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:56.076 [2024-11-20 18:34:14.533014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:56.076 [2024-11-20 18:34:14.533021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:56.076 [2024-11-20 18:34:14.533027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:56.076 [2024-11-20 18:34:14.533036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:56.076 [2024-11-20 18:34:14.533044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:56.076 [2024-11-20 18:34:14.533058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:56.076 [2024-11-20 18:34:14.533065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:56.076 [2024-11-20 18:34:14.533071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:56.076 [2024-11-20 18:34:14.533078] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:56.076 [2024-11-20 18:34:14.533084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:56.076 [2024-11-20 18:34:14.533115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:56.076 [2024-11-20 18:34:14.533123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:56.076 [2024-11-20 18:34:14.533130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:56.076 [2024-11-20 18:34:14.533137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:56.076 [2024-11-20 18:34:14.533143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:56.076 [2024-11-20 18:34:14.533150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:56.076 [2024-11-20 18:34:14.533157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:56.076 [2024-11-20 18:34:14.533164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:56.076 [2024-11-20 18:34:14.533171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:56.076 [2024-11-20 18:34:14.533178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:56.076 [2024-11-20 18:34:14.533185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:56.076 [2024-11-20 18:34:14.533192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:56.076 [2024-11-20 18:34:14.533199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:56.076 [2024-11-20 18:34:14.533206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:56.076 [2024-11-20 18:34:14.533212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:56.076 [2024-11-20 18:34:14.533219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:56.076 [2024-11-20 18:34:14.533225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:56.076 [2024-11-20 18:34:14.533234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:56.076 [2024-11-20 18:34:14.533241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:56.076 [2024-11-20 18:34:14.533248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:56.076 [2024-11-20 18:34:14.533255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:56.076 [2024-11-20 18:34:14.533261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:56.076 [2024-11-20 18:34:14.533268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:56.076 [2024-11-20 18:34:14.533276] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:56.076 [2024-11-20 18:34:14.533284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:56.076 [2024-11-20 18:34:14.533293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:56.076 [2024-11-20 18:34:14.533301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:56.076 [2024-11-20 18:34:14.533309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:56.076 [2024-11-20 18:34:14.533316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:56.076 [2024-11-20 18:34:14.533323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:56.076 
[2024-11-20 18:34:14.533329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:56.076 [2024-11-20 18:34:14.533336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:56.076 [2024-11-20 18:34:14.533343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:56.076 [2024-11-20 18:34:14.533352] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:56.076 [2024-11-20 18:34:14.533361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:56.076 [2024-11-20 18:34:14.533369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:56.076 [2024-11-20 18:34:14.533376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:56.076 [2024-11-20 18:34:14.533383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:56.076 [2024-11-20 18:34:14.533390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:56.076 [2024-11-20 18:34:14.533397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:56.076 [2024-11-20 18:34:14.533405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:56.076 [2024-11-20 18:34:14.533412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:56.076 [2024-11-20 18:34:14.533432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:56.076 [2024-11-20 18:34:14.533439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:56.076 [2024-11-20 18:34:14.533446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:56.076 [2024-11-20 18:34:14.533454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:56.076 [2024-11-20 18:34:14.533460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:56.076 [2024-11-20 18:34:14.533479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:56.076 [2024-11-20 18:34:14.533486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:56.076 [2024-11-20 18:34:14.533493] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:56.076 [2024-11-20 18:34:14.533504] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:56.076 [2024-11-20 18:34:14.533512] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:56.076 [2024-11-20 18:34:14.533519] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:56.076 [2024-11-20 18:34:14.533526] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:56.076 [2024-11-20 18:34:14.533533] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:56.076 [2024-11-20 18:34:14.533542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.076 [2024-11-20 18:34:14.533550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:56.076 [2024-11-20 18:34:14.533558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.679 ms 00:24:56.077 [2024-11-20 18:34:14.533566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.077 [2024-11-20 18:34:14.565083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.077 [2024-11-20 18:34:14.565145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:56.077 [2024-11-20 18:34:14.565158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.472 ms 00:24:56.077 [2024-11-20 18:34:14.565166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.077 [2024-11-20 18:34:14.565254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.077 [2024-11-20 18:34:14.565263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:56.077 [2024-11-20 18:34:14.565272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:24:56.077 [2024-11-20 18:34:14.565280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.077 [2024-11-20 18:34:14.611543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.077 [2024-11-20 18:34:14.611731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:56.077 [2024-11-20 18:34:14.611753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.205 ms 00:24:56.077 [2024-11-20 18:34:14.611763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.077 [2024-11-20 18:34:14.611813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.077 [2024-11-20 18:34:14.611823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:56.077 [2024-11-20 18:34:14.611833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:56.077 [2024-11-20 18:34:14.611845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.077 [2024-11-20 18:34:14.612451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.077 [2024-11-20 18:34:14.612473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:56.077 [2024-11-20 18:34:14.612486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:24:56.077 [2024-11-20 18:34:14.612494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.077 [2024-11-20 18:34:14.612650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.077 [2024-11-20 18:34:14.612661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:56.077 [2024-11-20 18:34:14.612670] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:24:56.077 [2024-11-20 18:34:14.612684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.077 [2024-11-20 18:34:14.628155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.077 [2024-11-20 18:34:14.628197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:56.077 [2024-11-20 18:34:14.628211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.451 ms 00:24:56.077 [2024-11-20 18:34:14.628219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.077 [2024-11-20 18:34:14.642136] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:56.077 [2024-11-20 18:34:14.642306] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:56.077 [2024-11-20 18:34:14.642326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.077 [2024-11-20 18:34:14.642334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:56.077 [2024-11-20 18:34:14.642344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.001 ms 00:24:56.077 [2024-11-20 18:34:14.642351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.077 [2024-11-20 18:34:14.667767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.077 [2024-11-20 18:34:14.667820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:56.077 [2024-11-20 18:34:14.667832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.370 ms 00:24:56.077 [2024-11-20 18:34:14.667840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.077 [2024-11-20 18:34:14.680602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.077 [2024-11-20 18:34:14.680653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:56.077 [2024-11-20 18:34:14.680664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.709 ms 00:24:56.077 [2024-11-20 18:34:14.680672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.077 [2024-11-20 18:34:14.692931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.077 [2024-11-20 18:34:14.693106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:56.077 [2024-11-20 18:34:14.693127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.211 ms 00:24:56.077 [2024-11-20 18:34:14.693135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.077 [2024-11-20 18:34:14.693779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.077 [2024-11-20 18:34:14.693807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:56.077 [2024-11-20 18:34:14.693818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:24:56.077 [2024-11-20 18:34:14.693829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.337 [2024-11-20 18:34:14.757967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.337 [2024-11-20 18:34:14.758026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:56.337 [2024-11-20 18:34:14.758050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 64.118 ms 00:24:56.337 [2024-11-20 18:34:14.758059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.337 [2024-11-20 18:34:14.769038] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:56.337 [2024-11-20 18:34:14.772215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.337 [2024-11-20 18:34:14.772257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:56.337 [2024-11-20 18:34:14.772270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.072 ms 00:24:56.337 [2024-11-20 18:34:14.772280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.337 [2024-11-20 18:34:14.772370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.337 [2024-11-20 18:34:14.772381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:56.337 [2024-11-20 18:34:14.772392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:24:56.337 [2024-11-20 18:34:14.772403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.337 [2024-11-20 18:34:14.774162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.337 [2024-11-20 18:34:14.774205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:56.337 [2024-11-20 18:34:14.774216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.720 ms 00:24:56.337 [2024-11-20 18:34:14.774225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.337 [2024-11-20 18:34:14.774254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.337 [2024-11-20 18:34:14.774265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:56.337 [2024-11-20 18:34:14.774274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:56.337 [2024-11-20 18:34:14.774282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.337 [2024-11-20 18:34:14.774324] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:56.337 [2024-11-20 18:34:14.774338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.337 [2024-11-20 18:34:14.774346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:56.337 [2024-11-20 18:34:14.774355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:56.337 [2024-11-20 18:34:14.774363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.337 [2024-11-20 18:34:14.799746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.337 [2024-11-20 18:34:14.799914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:56.337 [2024-11-20 18:34:14.799936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.364 ms 00:24:56.337 [2024-11-20 18:34:14.799953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.337 [2024-11-20 18:34:14.800034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:56.337 [2024-11-20 18:34:14.800044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:56.337 [2024-11-20 18:34:14.800053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:24:56.337 [2024-11-20 18:34:14.800060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
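Every management step in the FTL startup sequence above is traced by mngt/ftl_mngt.c as an Action / name / duration / status quadruplet. A minimal sketch for pairing the name and duration notices and totalling per-step time, assuming a console log in the format shown here (the ftl.log path is hypothetical):

import re

text = open("ftl.log").read()   # hypothetical path to a saved console log

# trace_step emits "name: <step>" and "duration: <ms>" notices in order;
# the lookahead stops a name at the next elapsed-time prefix or a newline.
names = re.findall(
    r"trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] name: (.*?)(?= \d{2}:\d{2}:\d{2}\.\d{3} |\n)",
    text)
durations = re.findall(
    r"trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration: ([\d.]+) ms",
    text)

for name, ms in zip(names, durations):
    print(f"{ms:>10} ms  {name}")
print(f"{sum(float(ms) for ms in durations):>10.3f} ms  total")

The quadruplets pair one-to-one in this log, so a positional zip suffices; the per-step sum can then be compared against the overall figure printed by finish_msg ('FTL startup', duration = 298.193 ms).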
00:24:56.337 [2024-11-20 18:34:14.801343] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 298.193 ms, result 0 00:24:57.722  [2024-11-20T18:34:17.294Z] Copying: 15/1024 [MB] (15 MBps) [2024-11-20T18:34:18.238Z] Copying: 34/1024 [MB] (18 MBps) [2024-11-20T18:34:19.183Z] Copying: 45/1024 [MB] (10 MBps) [2024-11-20T18:34:20.130Z] Copying: 63/1024 [MB] (18 MBps) [2024-11-20T18:34:21.105Z] Copying: 74/1024 [MB] (11 MBps) [2024-11-20T18:34:22.060Z] Copying: 85/1024 [MB] (10 MBps) [2024-11-20T18:34:23.004Z] Copying: 96/1024 [MB] (11 MBps) [2024-11-20T18:34:24.391Z] Copying: 108/1024 [MB] (11 MBps) [2024-11-20T18:34:25.335Z] Copying: 120/1024 [MB] (12 MBps) [2024-11-20T18:34:26.278Z] Copying: 133/1024 [MB] (12 MBps) [2024-11-20T18:34:27.223Z] Copying: 144/1024 [MB] (11 MBps) [2024-11-20T18:34:28.168Z] Copying: 154/1024 [MB] (10 MBps) [2024-11-20T18:34:29.111Z] Copying: 167/1024 [MB] (12 MBps) [2024-11-20T18:34:30.057Z] Copying: 179/1024 [MB] (12 MBps) [2024-11-20T18:34:30.999Z] Copying: 191/1024 [MB] (11 MBps) [2024-11-20T18:34:32.384Z] Copying: 201/1024 [MB] (10 MBps) [2024-11-20T18:34:33.324Z] Copying: 213/1024 [MB] (12 MBps) [2024-11-20T18:34:34.269Z] Copying: 225/1024 [MB] (12 MBps) [2024-11-20T18:34:35.212Z] Copying: 236/1024 [MB] (10 MBps) [2024-11-20T18:34:36.154Z] Copying: 247/1024 [MB] (10 MBps) [2024-11-20T18:34:37.097Z] Copying: 257/1024 [MB] (10 MBps) [2024-11-20T18:34:38.040Z] Copying: 269/1024 [MB] (11 MBps) [2024-11-20T18:34:39.426Z] Copying: 281/1024 [MB] (11 MBps) [2024-11-20T18:34:39.997Z] Copying: 293/1024 [MB] (11 MBps) [2024-11-20T18:34:41.384Z] Copying: 304/1024 [MB] (11 MBps) [2024-11-20T18:34:42.326Z] Copying: 316/1024 [MB] (11 MBps) [2024-11-20T18:34:43.269Z] Copying: 327/1024 [MB] (11 MBps) [2024-11-20T18:34:44.209Z] Copying: 339/1024 [MB] (12 MBps) [2024-11-20T18:34:45.152Z] Copying: 350/1024 [MB] (10 MBps) [2024-11-20T18:34:46.096Z] Copying: 360/1024 [MB] (10 MBps) [2024-11-20T18:34:47.038Z] Copying: 370/1024 [MB] (10 MBps) [2024-11-20T18:34:48.423Z] Copying: 390/1024 [MB] (19 MBps) [2024-11-20T18:34:48.997Z] Copying: 401/1024 [MB] (10 MBps) [2024-11-20T18:34:50.430Z] Copying: 413/1024 [MB] (12 MBps) [2024-11-20T18:34:51.008Z] Copying: 425/1024 [MB] (11 MBps) [2024-11-20T18:34:52.394Z] Copying: 438/1024 [MB] (13 MBps) [2024-11-20T18:34:53.339Z] Copying: 449/1024 [MB] (11 MBps) [2024-11-20T18:34:54.282Z] Copying: 461/1024 [MB] (12 MBps) [2024-11-20T18:34:55.228Z] Copying: 476/1024 [MB] (14 MBps) [2024-11-20T18:34:56.173Z] Copying: 489/1024 [MB] (13 MBps) [2024-11-20T18:34:57.119Z] Copying: 502/1024 [MB] (12 MBps) [2024-11-20T18:34:58.062Z] Copying: 522/1024 [MB] (19 MBps) [2024-11-20T18:34:59.006Z] Copying: 541/1024 [MB] (19 MBps) [2024-11-20T18:35:00.394Z] Copying: 562/1024 [MB] (20 MBps) [2024-11-20T18:35:01.338Z] Copying: 575/1024 [MB] (13 MBps) [2024-11-20T18:35:02.281Z] Copying: 587/1024 [MB] (11 MBps) [2024-11-20T18:35:03.225Z] Copying: 600/1024 [MB] (13 MBps) [2024-11-20T18:35:04.168Z] Copying: 610/1024 [MB] (10 MBps) [2024-11-20T18:35:05.112Z] Copying: 625/1024 [MB] (15 MBps) [2024-11-20T18:35:06.054Z] Copying: 639/1024 [MB] (13 MBps) [2024-11-20T18:35:06.998Z] Copying: 651/1024 [MB] (12 MBps) [2024-11-20T18:35:08.383Z] Copying: 661/1024 [MB] (10 MBps) [2024-11-20T18:35:09.325Z] Copying: 677/1024 [MB] (15 MBps) [2024-11-20T18:35:10.270Z] Copying: 693/1024 [MB] (15 MBps) [2024-11-20T18:35:11.214Z] Copying: 707/1024 [MB] (14 MBps) [2024-11-20T18:35:12.156Z] Copying: 721/1024 [MB] (14 MBps) 
[2024-11-20T18:35:13.099Z] Copying: 737/1024 [MB] (15 MBps) [2024-11-20T18:35:14.044Z] Copying: 748/1024 [MB] (10 MBps) [2024-11-20T18:35:15.433Z] Copying: 758/1024 [MB] (10 MBps) [2024-11-20T18:35:16.005Z] Copying: 769/1024 [MB] (10 MBps) [2024-11-20T18:35:17.394Z] Copying: 779/1024 [MB] (10 MBps) [2024-11-20T18:35:18.339Z] Copying: 790/1024 [MB] (10 MBps) [2024-11-20T18:35:19.356Z] Copying: 801/1024 [MB] (10 MBps) [2024-11-20T18:35:20.327Z] Copying: 811/1024 [MB] (10 MBps) [2024-11-20T18:35:21.271Z] Copying: 822/1024 [MB] (10 MBps) [2024-11-20T18:35:22.214Z] Copying: 833/1024 [MB] (11 MBps) [2024-11-20T18:35:23.160Z] Copying: 844/1024 [MB] (11 MBps) [2024-11-20T18:35:24.103Z] Copying: 862/1024 [MB] (17 MBps) [2024-11-20T18:35:25.047Z] Copying: 876/1024 [MB] (14 MBps) [2024-11-20T18:35:26.436Z] Copying: 891/1024 [MB] (15 MBps) [2024-11-20T18:35:27.009Z] Copying: 903/1024 [MB] (11 MBps) [2024-11-20T18:35:28.395Z] Copying: 914/1024 [MB] (10 MBps) [2024-11-20T18:35:29.336Z] Copying: 925/1024 [MB] (11 MBps) [2024-11-20T18:35:30.281Z] Copying: 936/1024 [MB] (10 MBps) [2024-11-20T18:35:31.225Z] Copying: 946/1024 [MB] (10 MBps) [2024-11-20T18:35:32.166Z] Copying: 957/1024 [MB] (10 MBps) [2024-11-20T18:35:33.109Z] Copying: 968/1024 [MB] (10 MBps) [2024-11-20T18:35:34.052Z] Copying: 986/1024 [MB] (18 MBps) [2024-11-20T18:35:34.623Z] Copying: 1005/1024 [MB] (18 MBps) [2024-11-20T18:35:34.883Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-11-20 18:35:34.665676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.254 [2024-11-20 18:35:34.666175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:16.254 [2024-11-20 18:35:34.666279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:16.254 [2024-11-20 18:35:34.666312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.254 [2024-11-20 18:35:34.666411] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:16.254 [2024-11-20 18:35:34.670332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.254 [2024-11-20 18:35:34.670472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:16.254 [2024-11-20 18:35:34.670546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.865 ms 00:26:16.254 [2024-11-20 18:35:34.670575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.254 [2024-11-20 18:35:34.670904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.254 [2024-11-20 18:35:34.670994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:16.254 [2024-11-20 18:35:34.671060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:26:16.254 [2024-11-20 18:35:34.671105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.254 [2024-11-20 18:35:34.675103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.254 [2024-11-20 18:35:34.675203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:16.254 [2024-11-20 18:35:34.675253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.759 ms 00:26:16.254 [2024-11-20 18:35:34.675271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.254 [2024-11-20 18:35:34.680109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.254 [2024-11-20 18:35:34.680204] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:16.254 [2024-11-20 18:35:34.680248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.803 ms 00:26:16.254 [2024-11-20 18:35:34.680266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.254 [2024-11-20 18:35:34.699046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.254 [2024-11-20 18:35:34.699154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:16.254 [2024-11-20 18:35:34.699197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.694 ms 00:26:16.254 [2024-11-20 18:35:34.699214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.254 [2024-11-20 18:35:34.710414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.254 [2024-11-20 18:35:34.710523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:16.254 [2024-11-20 18:35:34.710565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.167 ms 00:26:16.254 [2024-11-20 18:35:34.710582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.254 [2024-11-20 18:35:34.790105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.254 [2024-11-20 18:35:34.790195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:16.254 [2024-11-20 18:35:34.790241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.489 ms 00:26:16.254 [2024-11-20 18:35:34.790259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.254 [2024-11-20 18:35:34.808091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.254 [2024-11-20 18:35:34.808180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:16.254 [2024-11-20 18:35:34.808219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.810 ms 00:26:16.254 [2024-11-20 18:35:34.808236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.254 [2024-11-20 18:35:34.825499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.254 [2024-11-20 18:35:34.825583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:16.254 [2024-11-20 18:35:34.825629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.234 ms 00:26:16.254 [2024-11-20 18:35:34.825645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.254 [2024-11-20 18:35:34.842705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.254 [2024-11-20 18:35:34.842789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:16.254 [2024-11-20 18:35:34.842827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.030 ms 00:26:16.254 [2024-11-20 18:35:34.842843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.254 [2024-11-20 18:35:34.860003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.254 [2024-11-20 18:35:34.860091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:16.254 [2024-11-20 18:35:34.860141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.114 ms 00:26:16.254 [2024-11-20 18:35:34.860158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.254 [2024-11-20 18:35:34.860179] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Bands validity: 00:26:16.254 [2024-11-20 18:35:34.860189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:26:16.254 [2024-11-20 18:35:34.860198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:16.254 [2024-11-20 18:35:34.860204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:16.254 [2024-11-20 18:35:34.860210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:16.254 [2024-11-20 18:35:34.860216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:16.254 [2024-11-20 18:35:34.860222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:16.254 [2024-11-20 18:35:34.860228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:16.254 [2024-11-20 18:35:34.860233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:16.254 [2024-11-20 18:35:34.860239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:16.254 [2024-11-20 18:35:34.860245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:16.254 [2024-11-20 18:35:34.860252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:16.254 [2024-11-20 18:35:34.860261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:16.254 [2024-11-20 18:35:34.860267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:16.254 [2024-11-20 18:35:34.860274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:16.254 [2024-11-20 18:35:34.860280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:16.254 [2024-11-20 18:35:34.860286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:16.254 [2024-11-20 18:35:34.860291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:16.254 [2024-11-20 18:35:34.860297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:16.254 [2024-11-20 18:35:34.860303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:16.254 [2024-11-20 18:35:34.860309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:16.254 [2024-11-20 18:35:34.860315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:16.254 [2024-11-20 18:35:34.860321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:16.255 [2024-11-20 18:35:34.860326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:16.255 [2024-11-20 18:35:34.860332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:16.255 [2024-11-20 18:35:34.860338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:16.255 [2024-11-20 18:35:34.860344 - 18:35:34.860767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 26-99: 0 / 261120 wr_cnt: 0 state: free (one identical record per band)
00:26:16.255 [2024-11-20 18:35:34.860772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:16.255 [2024-11-20 18:35:34.860784] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:16.255 [2024-11-20 18:35:34.860790] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d1c29b40-f180-4470-a535-460b563a5775 00:26:16.255 [2024-11-20 18:35:34.860796] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:26:16.255 [2024-11-20 18:35:34.860802] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 24256 00:26:16.255 [2024-11-20 18:35:34.860807] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 23296 00:26:16.255 [2024-11-20 18:35:34.860814] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0412 00:26:16.255 [2024-11-20 18:35:34.860820] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:16.255 [2024-11-20 18:35:34.860828] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:16.255 [2024-11-20 18:35:34.860834] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:16.255 [2024-11-20 18:35:34.860843] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:16.255 [2024-11-20 18:35:34.860848] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:16.255 [2024-11-20 18:35:34.860854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.255 [2024-11-20 18:35:34.860860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:16.256 [2024-11-20 18:35:34.860866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.675 ms 00:26:16.256 [2024-11-20 18:35:34.860872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.256 [2024-11-20 18:35:34.870636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.256 [2024-11-20 18:35:34.870723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:16.256 [2024-11-20 18:35:34.870761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.752 ms 00:26:16.256 [2024-11-20 18:35:34.870782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.256 [2024-11-20 18:35:34.871057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.256 [2024-11-20 18:35:34.871124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:16.256 [2024-11-20 18:35:34.871165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:26:16.256 [2024-11-20 18:35:34.871182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.514 [2024-11-20 18:35:34.897358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.514 [2024-11-20 18:35:34.897448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:16.514 [2024-11-20 18:35:34.897491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.514 [2024-11-20 18:35:34.897508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.514 [2024-11-20 18:35:34.897556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.514 [2024-11-20 18:35:34.897572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:16.514 [2024-11-20 18:35:34.897587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:26:16.515 [2024-11-20 18:35:34.897600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.515 [2024-11-20 18:35:34.897650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.515 [2024-11-20 18:35:34.897669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:16.515 [2024-11-20 18:35:34.897685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.515 [2024-11-20 18:35:34.897727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.515 [2024-11-20 18:35:34.897773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.515 [2024-11-20 18:35:34.897792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:16.515 [2024-11-20 18:35:34.897825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.515 [2024-11-20 18:35:34.897842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.515 [2024-11-20 18:35:34.957416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.515 [2024-11-20 18:35:34.957534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:16.515 [2024-11-20 18:35:34.957577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.515 [2024-11-20 18:35:34.957593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.515 [2024-11-20 18:35:35.007089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.515 [2024-11-20 18:35:35.007204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:16.515 [2024-11-20 18:35:35.007241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.515 [2024-11-20 18:35:35.007258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.515 [2024-11-20 18:35:35.007302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.515 [2024-11-20 18:35:35.007318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:16.515 [2024-11-20 18:35:35.007333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.515 [2024-11-20 18:35:35.007348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.515 [2024-11-20 18:35:35.007403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.515 [2024-11-20 18:35:35.007422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:16.515 [2024-11-20 18:35:35.007438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.515 [2024-11-20 18:35:35.007484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.515 [2024-11-20 18:35:35.007562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.515 [2024-11-20 18:35:35.007570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:16.515 [2024-11-20 18:35:35.007577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.515 [2024-11-20 18:35:35.007583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.515 [2024-11-20 18:35:35.007607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.515 [2024-11-20 18:35:35.007614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:16.515 
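The stats dump above reports 24256 total media writes against 23296 user writes; the WAF figure is simply their ratio, since every block beyond what the host submitted was written by the FTL itself for relocation or metadata. A quick check of the arithmetic (a standalone sketch, not part of the SPDK tree; the variable names are illustrative):

    total_writes=24256   # blocks written to media (user data plus FTL relocation/metadata)
    user_writes=23296    # blocks submitted by the host
    awk -v t="$total_writes" -v u="$user_writes" \
        'BEGIN { printf "WAF: %.4f\n", t / u }'
    # prints: WAF: 1.0412, matching the ftl_dev_dump_stats record above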
[2024-11-20 18:35:35.007620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.515 [2024-11-20 18:35:35.007625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.515 [2024-11-20 18:35:35.007652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.515 [2024-11-20 18:35:35.007658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:16.515 [2024-11-20 18:35:35.007665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.515 [2024-11-20 18:35:35.007671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.515 [2024-11-20 18:35:35.007705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.515 [2024-11-20 18:35:35.007712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:16.515 [2024-11-20 18:35:35.007718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.515 [2024-11-20 18:35:35.007724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.515 [2024-11-20 18:35:35.007813] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 342.123 ms, result 0 00:26:17.083 00:26:17.083 00:26:17.083 18:35:35 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:19.622 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:19.622 18:35:37 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:26:19.622 18:35:37 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:26:19.622 18:35:37 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:19.622 18:35:37 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:19.622 18:35:37 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:19.622 Process with pid 77063 is not found 00:26:19.622 Remove shared memory files 00:26:19.622 18:35:37 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 77063 00:26:19.622 18:35:37 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77063 ']' 00:26:19.622 18:35:37 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77063 00:26:19.622 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77063) - No such process 00:26:19.622 18:35:37 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 77063 is not found' 00:26:19.622 18:35:37 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:26:19.622 18:35:37 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:19.622 18:35:37 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:26:19.622 18:35:37 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:26:19.622 18:35:37 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:26:19.622 18:35:37 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:19.622 18:35:37 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:26:19.622 ************************************ 00:26:19.622 END TEST ftl_restore 00:26:19.622 ************************************ 00:26:19.622 00:26:19.622 real 5m21.312s 00:26:19.622 user 5m8.810s 00:26:19.622 sys 0m12.106s 00:26:19.622 18:35:37 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:19.622 18:35:37 ftl.ftl_restore -- 
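The md5sum -c call above is the heart of the restore test: a digest of the test file is recorded before the dirty shutdown and re-verified once the device has been brought back. A minimal sketch of that round trip (paths taken from the log; the real script also generates the file and drives it through the FTL bdev, which is omitted here):

    testfile=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
    md5sum "$testfile" > "$testfile.md5"   # record the digest up front
    # ... dirty shutdown, then restore the FTL device ...
    md5sum -c "$testfile.md5"              # emits "testfile: OK" when the contents survived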
common/autotest_common.sh@10 -- # set +x 00:26:19.622 18:35:37 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:26:19.622 18:35:37 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:26:19.622 18:35:37 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:19.622 18:35:37 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:19.622 ************************************ 00:26:19.622 START TEST ftl_dirty_shutdown 00:26:19.622 ************************************ 00:26:19.622 18:35:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:26:19.622 * Looking for test storage... 00:26:19.622 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:19.622 18:35:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:19.622 18:35:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:26:19.622 18:35:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:19.622 18:35:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:19.622 18:35:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:19.622 18:35:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:19.622 18:35:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:19.622 18:35:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:26:19.622 18:35:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:26:19.622 18:35:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:26:19.622 18:35:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:19.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.623 --rc genhtml_branch_coverage=1 00:26:19.623 --rc genhtml_function_coverage=1 00:26:19.623 --rc genhtml_legend=1 00:26:19.623 --rc geninfo_all_blocks=1 00:26:19.623 --rc geninfo_unexecuted_blocks=1 00:26:19.623 00:26:19.623 ' 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:19.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.623 --rc genhtml_branch_coverage=1 00:26:19.623 --rc genhtml_function_coverage=1 00:26:19.623 --rc genhtml_legend=1 00:26:19.623 --rc geninfo_all_blocks=1 00:26:19.623 --rc geninfo_unexecuted_blocks=1 00:26:19.623 00:26:19.623 ' 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:19.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.623 --rc genhtml_branch_coverage=1 00:26:19.623 --rc genhtml_function_coverage=1 00:26:19.623 --rc genhtml_legend=1 00:26:19.623 --rc geninfo_all_blocks=1 00:26:19.623 --rc geninfo_unexecuted_blocks=1 00:26:19.623 00:26:19.623 ' 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:19.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.623 --rc genhtml_branch_coverage=1 00:26:19.623 --rc genhtml_function_coverage=1 00:26:19.623 --rc genhtml_legend=1 00:26:19.623 --rc geninfo_all_blocks=1 00:26:19.623 --rc geninfo_unexecuted_blocks=1 00:26:19.623 00:26:19.623 ' 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:26:19.623 18:35:38 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=80441 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 80441 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80441 ']' 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:19.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:19.623 18:35:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:19.623 [2024-11-20 18:35:38.116811] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
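The waitforlisten call above blocks until the freshly spawned spdk_tgt (pid 80441 here) answers on /var/tmp/spdk.sock. A simplified equivalent of that launch-and-poll step (waitforlisten in autotest_common.sh does more bookkeeping; this sketch assumes rpc.py's spdk_get_version call as the liveness probe):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    svcpid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version &>/dev/null; do
        kill -0 "$svcpid" || exit 1   # give up if the target process died
        sleep 0.5                     # otherwise keep polling the RPC socket
    done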
00:26:19.623 [2024-11-20 18:35:38.117082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80441 ] 00:26:19.881 [2024-11-20 18:35:38.273900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.881 [2024-11-20 18:35:38.355420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:20.448 18:35:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:20.448 18:35:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:26:20.448 18:35:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:26:20.448 18:35:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:26:20.448 18:35:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:20.448 18:35:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:26:20.448 18:35:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:26:20.448 18:35:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:26:20.714 18:35:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:26:20.714 18:35:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:26:20.714 18:35:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:26:20.714 18:35:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:26:20.714 18:35:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:20.714 18:35:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:20.714 18:35:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:20.714 18:35:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:26:20.972 18:35:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:20.972 { 00:26:20.972 "name": "nvme0n1", 00:26:20.972 "aliases": [ 00:26:20.972 "6732754d-841d-49ef-a0af-1553b5070ee7" 00:26:20.972 ], 00:26:20.972 "product_name": "NVMe disk", 00:26:20.972 "block_size": 4096, 00:26:20.972 "num_blocks": 1310720, 00:26:20.972 "uuid": "6732754d-841d-49ef-a0af-1553b5070ee7", 00:26:20.972 "numa_id": -1, 00:26:20.972 "assigned_rate_limits": { 00:26:20.972 "rw_ios_per_sec": 0, 00:26:20.972 "rw_mbytes_per_sec": 0, 00:26:20.972 "r_mbytes_per_sec": 0, 00:26:20.972 "w_mbytes_per_sec": 0 00:26:20.972 }, 00:26:20.972 "claimed": true, 00:26:20.972 "claim_type": "read_many_write_one", 00:26:20.972 "zoned": false, 00:26:20.972 "supported_io_types": { 00:26:20.972 "read": true, 00:26:20.972 "write": true, 00:26:20.972 "unmap": true, 00:26:20.972 "flush": true, 00:26:20.972 "reset": true, 00:26:20.972 "nvme_admin": true, 00:26:20.972 "nvme_io": true, 00:26:20.972 "nvme_io_md": false, 00:26:20.972 "write_zeroes": true, 00:26:20.972 "zcopy": false, 00:26:20.972 "get_zone_info": false, 00:26:20.972 "zone_management": false, 00:26:20.972 "zone_append": false, 00:26:20.972 "compare": true, 00:26:20.972 "compare_and_write": false, 00:26:20.972 "abort": true, 00:26:20.972 "seek_hole": false, 00:26:20.972 "seek_data": false, 00:26:20.972 
"copy": true, 00:26:20.972 "nvme_iov_md": false 00:26:20.972 }, 00:26:20.972 "driver_specific": { 00:26:20.972 "nvme": [ 00:26:20.972 { 00:26:20.972 "pci_address": "0000:00:11.0", 00:26:20.972 "trid": { 00:26:20.972 "trtype": "PCIe", 00:26:20.972 "traddr": "0000:00:11.0" 00:26:20.972 }, 00:26:20.972 "ctrlr_data": { 00:26:20.972 "cntlid": 0, 00:26:20.972 "vendor_id": "0x1b36", 00:26:20.972 "model_number": "QEMU NVMe Ctrl", 00:26:20.972 "serial_number": "12341", 00:26:20.972 "firmware_revision": "8.0.0", 00:26:20.972 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:20.972 "oacs": { 00:26:20.972 "security": 0, 00:26:20.972 "format": 1, 00:26:20.972 "firmware": 0, 00:26:20.972 "ns_manage": 1 00:26:20.972 }, 00:26:20.972 "multi_ctrlr": false, 00:26:20.972 "ana_reporting": false 00:26:20.972 }, 00:26:20.972 "vs": { 00:26:20.972 "nvme_version": "1.4" 00:26:20.972 }, 00:26:20.972 "ns_data": { 00:26:20.972 "id": 1, 00:26:20.972 "can_share": false 00:26:20.972 } 00:26:20.972 } 00:26:20.972 ], 00:26:20.972 "mp_policy": "active_passive" 00:26:20.972 } 00:26:20.972 } 00:26:20.972 ]' 00:26:20.972 18:35:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:20.972 18:35:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:20.972 18:35:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:20.972 18:35:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:26:20.972 18:35:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:26:20.972 18:35:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:26:20.972 18:35:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:26:20.972 18:35:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:26:20.972 18:35:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:26:20.972 18:35:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:20.972 18:35:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:21.230 18:35:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=94c8dd5f-1fe7-45c0-823e-03198908fa29 00:26:21.230 18:35:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:26:21.230 18:35:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 94c8dd5f-1fe7-45c0-823e-03198908fa29 00:26:21.230 18:35:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:26:21.487 18:35:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=da218ca9-3456-4d0b-9d3e-a4fe1c9f1d18 00:26:21.488 18:35:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u da218ca9-3456-4d0b-9d3e-a4fe1c9f1d18 00:26:21.746 18:35:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=7132f905-3403-4c74-ba9a-9bb43845cbcc 00:26:21.746 18:35:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:26:21.746 18:35:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7132f905-3403-4c74-ba9a-9bb43845cbcc 00:26:21.746 18:35:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:26:21.746 18:35:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:26:21.746 18:35:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=7132f905-3403-4c74-ba9a-9bb43845cbcc 00:26:21.746 18:35:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:26:21.746 18:35:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 7132f905-3403-4c74-ba9a-9bb43845cbcc 00:26:21.746 18:35:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=7132f905-3403-4c74-ba9a-9bb43845cbcc 00:26:21.746 18:35:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:21.746 18:35:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:21.746 18:35:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:21.746 18:35:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7132f905-3403-4c74-ba9a-9bb43845cbcc 00:26:22.004 18:35:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:22.004 { 00:26:22.004 "name": "7132f905-3403-4c74-ba9a-9bb43845cbcc", 00:26:22.004 "aliases": [ 00:26:22.004 "lvs/nvme0n1p0" 00:26:22.004 ], 00:26:22.004 "product_name": "Logical Volume", 00:26:22.005 "block_size": 4096, 00:26:22.005 "num_blocks": 26476544, 00:26:22.005 "uuid": "7132f905-3403-4c74-ba9a-9bb43845cbcc", 00:26:22.005 "assigned_rate_limits": { 00:26:22.005 "rw_ios_per_sec": 0, 00:26:22.005 "rw_mbytes_per_sec": 0, 00:26:22.005 "r_mbytes_per_sec": 0, 00:26:22.005 "w_mbytes_per_sec": 0 00:26:22.005 }, 00:26:22.005 "claimed": false, 00:26:22.005 "zoned": false, 00:26:22.005 "supported_io_types": { 00:26:22.005 "read": true, 00:26:22.005 "write": true, 00:26:22.005 "unmap": true, 00:26:22.005 "flush": false, 00:26:22.005 "reset": true, 00:26:22.005 "nvme_admin": false, 00:26:22.005 "nvme_io": false, 00:26:22.005 "nvme_io_md": false, 00:26:22.005 "write_zeroes": true, 00:26:22.005 "zcopy": false, 00:26:22.005 "get_zone_info": false, 00:26:22.005 "zone_management": false, 00:26:22.005 "zone_append": false, 00:26:22.005 "compare": false, 00:26:22.005 "compare_and_write": false, 00:26:22.005 "abort": false, 00:26:22.005 "seek_hole": true, 00:26:22.005 "seek_data": true, 00:26:22.005 "copy": false, 00:26:22.005 "nvme_iov_md": false 00:26:22.005 }, 00:26:22.005 "driver_specific": { 00:26:22.005 "lvol": { 00:26:22.005 "lvol_store_uuid": "da218ca9-3456-4d0b-9d3e-a4fe1c9f1d18", 00:26:22.005 "base_bdev": "nvme0n1", 00:26:22.005 "thin_provision": true, 00:26:22.005 "num_allocated_clusters": 0, 00:26:22.005 "snapshot": false, 00:26:22.005 "clone": false, 00:26:22.005 "esnap_clone": false 00:26:22.005 } 00:26:22.005 } 00:26:22.005 } 00:26:22.005 ]' 00:26:22.005 18:35:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:22.005 18:35:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:22.005 18:35:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:22.005 18:35:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:22.005 18:35:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:22.005 18:35:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:26:22.005 18:35:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:26:22.005 18:35:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:26:22.005 18:35:40 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:26:22.263 18:35:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:26:22.263 18:35:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:26:22.263 18:35:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 7132f905-3403-4c74-ba9a-9bb43845cbcc 00:26:22.263 18:35:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=7132f905-3403-4c74-ba9a-9bb43845cbcc 00:26:22.263 18:35:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:22.263 18:35:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:22.263 18:35:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:22.263 18:35:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7132f905-3403-4c74-ba9a-9bb43845cbcc 00:26:22.520 18:35:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:22.520 { 00:26:22.520 "name": "7132f905-3403-4c74-ba9a-9bb43845cbcc", 00:26:22.520 "aliases": [ 00:26:22.520 "lvs/nvme0n1p0" 00:26:22.520 ], 00:26:22.520 "product_name": "Logical Volume", 00:26:22.520 "block_size": 4096, 00:26:22.520 "num_blocks": 26476544, 00:26:22.520 "uuid": "7132f905-3403-4c74-ba9a-9bb43845cbcc", 00:26:22.520 "assigned_rate_limits": { 00:26:22.520 "rw_ios_per_sec": 0, 00:26:22.520 "rw_mbytes_per_sec": 0, 00:26:22.520 "r_mbytes_per_sec": 0, 00:26:22.520 "w_mbytes_per_sec": 0 00:26:22.520 }, 00:26:22.520 "claimed": false, 00:26:22.520 "zoned": false, 00:26:22.520 "supported_io_types": { 00:26:22.520 "read": true, 00:26:22.520 "write": true, 00:26:22.520 "unmap": true, 00:26:22.520 "flush": false, 00:26:22.520 "reset": true, 00:26:22.520 "nvme_admin": false, 00:26:22.520 "nvme_io": false, 00:26:22.520 "nvme_io_md": false, 00:26:22.520 "write_zeroes": true, 00:26:22.520 "zcopy": false, 00:26:22.520 "get_zone_info": false, 00:26:22.520 "zone_management": false, 00:26:22.520 "zone_append": false, 00:26:22.520 "compare": false, 00:26:22.520 "compare_and_write": false, 00:26:22.520 "abort": false, 00:26:22.520 "seek_hole": true, 00:26:22.520 "seek_data": true, 00:26:22.520 "copy": false, 00:26:22.521 "nvme_iov_md": false 00:26:22.521 }, 00:26:22.521 "driver_specific": { 00:26:22.521 "lvol": { 00:26:22.521 "lvol_store_uuid": "da218ca9-3456-4d0b-9d3e-a4fe1c9f1d18", 00:26:22.521 "base_bdev": "nvme0n1", 00:26:22.521 "thin_provision": true, 00:26:22.521 "num_allocated_clusters": 0, 00:26:22.521 "snapshot": false, 00:26:22.521 "clone": false, 00:26:22.521 "esnap_clone": false 00:26:22.521 } 00:26:22.521 } 00:26:22.521 } 00:26:22.521 ]' 00:26:22.521 18:35:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:22.521 18:35:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:22.521 18:35:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:22.521 18:35:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:22.521 18:35:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:22.521 18:35:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:26:22.521 18:35:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:26:22.521 18:35:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:26:22.778 18:35:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:26:22.778 18:35:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 7132f905-3403-4c74-ba9a-9bb43845cbcc 00:26:22.778 18:35:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=7132f905-3403-4c74-ba9a-9bb43845cbcc 00:26:22.778 18:35:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:22.778 18:35:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:22.778 18:35:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:22.778 18:35:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7132f905-3403-4c74-ba9a-9bb43845cbcc 00:26:22.778 18:35:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:22.778 { 00:26:22.778 "name": "7132f905-3403-4c74-ba9a-9bb43845cbcc", 00:26:22.778 "aliases": [ 00:26:22.778 "lvs/nvme0n1p0" 00:26:22.778 ], 00:26:22.778 "product_name": "Logical Volume", 00:26:22.778 "block_size": 4096, 00:26:22.778 "num_blocks": 26476544, 00:26:22.778 "uuid": "7132f905-3403-4c74-ba9a-9bb43845cbcc", 00:26:22.778 "assigned_rate_limits": { 00:26:22.778 "rw_ios_per_sec": 0, 00:26:22.778 "rw_mbytes_per_sec": 0, 00:26:22.778 "r_mbytes_per_sec": 0, 00:26:22.778 "w_mbytes_per_sec": 0 00:26:22.778 }, 00:26:22.778 "claimed": false, 00:26:22.778 "zoned": false, 00:26:22.778 "supported_io_types": { 00:26:22.778 "read": true, 00:26:22.778 "write": true, 00:26:22.778 "unmap": true, 00:26:22.778 "flush": false, 00:26:22.778 "reset": true, 00:26:22.778 "nvme_admin": false, 00:26:22.778 "nvme_io": false, 00:26:22.778 "nvme_io_md": false, 00:26:22.778 "write_zeroes": true, 00:26:22.778 "zcopy": false, 00:26:22.778 "get_zone_info": false, 00:26:22.778 "zone_management": false, 00:26:22.778 "zone_append": false, 00:26:22.778 "compare": false, 00:26:22.778 "compare_and_write": false, 00:26:22.778 "abort": false, 00:26:22.778 "seek_hole": true, 00:26:22.778 "seek_data": true, 00:26:22.778 "copy": false, 00:26:22.778 "nvme_iov_md": false 00:26:22.778 }, 00:26:22.778 "driver_specific": { 00:26:22.778 "lvol": { 00:26:22.778 "lvol_store_uuid": "da218ca9-3456-4d0b-9d3e-a4fe1c9f1d18", 00:26:22.778 "base_bdev": "nvme0n1", 00:26:22.778 "thin_provision": true, 00:26:22.778 "num_allocated_clusters": 0, 00:26:22.778 "snapshot": false, 00:26:22.778 "clone": false, 00:26:22.778 "esnap_clone": false 00:26:22.778 } 00:26:22.778 } 00:26:22.778 } 00:26:22.778 ]' 00:26:22.778 18:35:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:23.037 18:35:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:23.037 18:35:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:23.037 18:35:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:23.037 18:35:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:23.037 18:35:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:26:23.037 18:35:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:26:23.037 18:35:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 7132f905-3403-4c74-ba9a-9bb43845cbcc 
--l2p_dram_limit 10' 00:26:23.037 18:35:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:26:23.037 18:35:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:26:23.037 18:35:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:26:23.037 18:35:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7132f905-3403-4c74-ba9a-9bb43845cbcc --l2p_dram_limit 10 -c nvc0n1p0 00:26:23.037 [2024-11-20 18:35:41.645512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.037 [2024-11-20 18:35:41.645549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:23.037 [2024-11-20 18:35:41.645562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:23.037 [2024-11-20 18:35:41.645569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.037 [2024-11-20 18:35:41.645613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.037 [2024-11-20 18:35:41.645620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:23.037 [2024-11-20 18:35:41.645628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:26:23.037 [2024-11-20 18:35:41.645634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.037 [2024-11-20 18:35:41.645653] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:23.037 [2024-11-20 18:35:41.646225] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:23.037 [2024-11-20 18:35:41.646242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.037 [2024-11-20 18:35:41.646248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:23.037 [2024-11-20 18:35:41.646256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms 00:26:23.037 [2024-11-20 18:35:41.646262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.037 [2024-11-20 18:35:41.646303] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 0be3887f-60af-4a3e-bbbe-2f44c2e518c4 00:26:23.037 [2024-11-20 18:35:41.647255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.037 [2024-11-20 18:35:41.647279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:26:23.037 [2024-11-20 18:35:41.647287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:26:23.037 [2024-11-20 18:35:41.647295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.037 [2024-11-20 18:35:41.652028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.037 [2024-11-20 18:35:41.652058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:23.037 [2024-11-20 18:35:41.652067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.676 ms 00:26:23.037 [2024-11-20 18:35:41.652074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.037 [2024-11-20 18:35:41.652150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.037 [2024-11-20 18:35:41.652159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:23.037 [2024-11-20 18:35:41.652165] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:26:23.037 [2024-11-20 18:35:41.652175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.037 [2024-11-20 18:35:41.652215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.037 [2024-11-20 18:35:41.652225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:23.037 [2024-11-20 18:35:41.652231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:23.037 [2024-11-20 18:35:41.652240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.037 [2024-11-20 18:35:41.652256] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:23.037 [2024-11-20 18:35:41.655246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.037 [2024-11-20 18:35:41.655329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:23.037 [2024-11-20 18:35:41.655377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.993 ms 00:26:23.037 [2024-11-20 18:35:41.655396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.037 [2024-11-20 18:35:41.655433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.037 [2024-11-20 18:35:41.655450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:23.037 [2024-11-20 18:35:41.655467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:23.037 [2024-11-20 18:35:41.655512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.037 [2024-11-20 18:35:41.655541] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:26:23.037 [2024-11-20 18:35:41.655655] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:23.037 [2024-11-20 18:35:41.655737] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:23.037 [2024-11-20 18:35:41.655788] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:23.037 [2024-11-20 18:35:41.655817] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:23.037 [2024-11-20 18:35:41.655843] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:23.037 [2024-11-20 18:35:41.655867] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:23.037 [2024-11-20 18:35:41.655883] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:23.038 [2024-11-20 18:35:41.655902] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:23.038 [2024-11-20 18:35:41.655944] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:23.038 [2024-11-20 18:35:41.655963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.038 [2024-11-20 18:35:41.655979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:23.038 [2024-11-20 18:35:41.655995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:26:23.038 [2024-11-20 18:35:41.656018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.038 [2024-11-20 18:35:41.656104] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.038 [2024-11-20 18:35:41.656169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:23.038 [2024-11-20 18:35:41.656190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:26:23.038 [2024-11-20 18:35:41.656205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.038 [2024-11-20 18:35:41.656308] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:23.038 [2024-11-20 18:35:41.656380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:23.038 [2024-11-20 18:35:41.656402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:23.038 [2024-11-20 18:35:41.656418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:23.038 [2024-11-20 18:35:41.656436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:23.038 [2024-11-20 18:35:41.656450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:23.038 [2024-11-20 18:35:41.656467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:23.038 [2024-11-20 18:35:41.656482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:23.038 [2024-11-20 18:35:41.656498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:23.038 [2024-11-20 18:35:41.656553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:23.038 [2024-11-20 18:35:41.656573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:23.038 [2024-11-20 18:35:41.656588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:23.038 [2024-11-20 18:35:41.656604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:23.038 [2024-11-20 18:35:41.656619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:23.038 [2024-11-20 18:35:41.656634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:23.038 [2024-11-20 18:35:41.656649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:23.038 [2024-11-20 18:35:41.656697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:23.038 [2024-11-20 18:35:41.656716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:23.038 [2024-11-20 18:35:41.656731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:23.038 [2024-11-20 18:35:41.656747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:23.038 [2024-11-20 18:35:41.656762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:23.038 [2024-11-20 18:35:41.656778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:23.038 [2024-11-20 18:35:41.656793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:23.038 [2024-11-20 18:35:41.656809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:23.038 [2024-11-20 18:35:41.656865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:23.038 [2024-11-20 18:35:41.656880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:23.038 [2024-11-20 18:35:41.656895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:23.038 [2024-11-20 18:35:41.656910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:23.038 [2024-11-20 18:35:41.656926] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:23.038 [2024-11-20 18:35:41.656951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:23.038 [2024-11-20 18:35:41.656990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:23.038 [2024-11-20 18:35:41.657006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:23.038 [2024-11-20 18:35:41.657025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:23.038 [2024-11-20 18:35:41.657064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:23.038 [2024-11-20 18:35:41.657084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:23.038 [2024-11-20 18:35:41.657114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:23.038 [2024-11-20 18:35:41.657153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:23.038 [2024-11-20 18:35:41.657170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:23.038 [2024-11-20 18:35:41.657186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:23.038 [2024-11-20 18:35:41.657202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:23.038 [2024-11-20 18:35:41.657209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:23.038 [2024-11-20 18:35:41.657215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:23.038 [2024-11-20 18:35:41.657222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:23.038 [2024-11-20 18:35:41.657228] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:23.038 [2024-11-20 18:35:41.657237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:23.038 [2024-11-20 18:35:41.657243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:23.038 [2024-11-20 18:35:41.657251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:23.038 [2024-11-20 18:35:41.657258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:23.038 [2024-11-20 18:35:41.657265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:23.038 [2024-11-20 18:35:41.657271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:23.038 [2024-11-20 18:35:41.657277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:23.038 [2024-11-20 18:35:41.657283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:23.038 [2024-11-20 18:35:41.657290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:23.038 [2024-11-20 18:35:41.657298] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:23.038 [2024-11-20 18:35:41.657307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:23.038 [2024-11-20 18:35:41.657316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:23.038 [2024-11-20 18:35:41.657323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:23.038 [2024-11-20 18:35:41.657329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:23.038 [2024-11-20 18:35:41.657336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:23.038 [2024-11-20 18:35:41.657341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:23.038 [2024-11-20 18:35:41.657348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:23.038 [2024-11-20 18:35:41.657355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:23.038 [2024-11-20 18:35:41.657363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:23.038 [2024-11-20 18:35:41.657369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:23.038 [2024-11-20 18:35:41.657377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:23.038 [2024-11-20 18:35:41.657383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:23.038 [2024-11-20 18:35:41.657391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:23.038 [2024-11-20 18:35:41.657396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:23.038 [2024-11-20 18:35:41.657403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:23.038 [2024-11-20 18:35:41.657409] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:23.038 [2024-11-20 18:35:41.657417] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:23.038 [2024-11-20 18:35:41.657424] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:23.038 [2024-11-20 18:35:41.657431] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:23.038 [2024-11-20 18:35:41.657437] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:23.038 [2024-11-20 18:35:41.657443] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:23.038 [2024-11-20 18:35:41.657451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.038 [2024-11-20 18:35:41.657458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:23.038 [2024-11-20 18:35:41.657464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.197 ms 00:26:23.038 [2024-11-20 18:35:41.657471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.038 [2024-11-20 18:35:41.657510] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:26:23.038 [2024-11-20 18:35:41.657521] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:26.332 [2024-11-20 18:35:44.830231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.332 [2024-11-20 18:35:44.830305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:26.332 [2024-11-20 18:35:44.830324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3172.706 ms 00:26:26.332 [2024-11-20 18:35:44.830335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.332 [2024-11-20 18:35:44.859699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.332 [2024-11-20 18:35:44.859758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:26.332 [2024-11-20 18:35:44.859772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.136 ms 00:26:26.332 [2024-11-20 18:35:44.859783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.332 [2024-11-20 18:35:44.859918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.332 [2024-11-20 18:35:44.859931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:26.332 [2024-11-20 18:35:44.859941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:26:26.332 [2024-11-20 18:35:44.859955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.332 [2024-11-20 18:35:44.894423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.332 [2024-11-20 18:35:44.894649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:26.332 [2024-11-20 18:35:44.894672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.408 ms 00:26:26.332 [2024-11-20 18:35:44.894684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.332 [2024-11-20 18:35:44.894723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.332 [2024-11-20 18:35:44.894742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:26.332 [2024-11-20 18:35:44.894750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:26.332 [2024-11-20 18:35:44.894761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.332 [2024-11-20 18:35:44.895360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.332 [2024-11-20 18:35:44.895390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:26.332 [2024-11-20 18:35:44.895401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:26:26.332 [2024-11-20 18:35:44.895411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.332 [2024-11-20 18:35:44.895525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.333 [2024-11-20 18:35:44.895539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:26.333 [2024-11-20 18:35:44.895551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:26:26.333 [2024-11-20 18:35:44.895566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.333 [2024-11-20 18:35:44.913559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.333 [2024-11-20 18:35:44.913742] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:26.333 [2024-11-20 18:35:44.913762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.972 ms 00:26:26.333 [2024-11-20 18:35:44.913774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.333 [2024-11-20 18:35:44.927209] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:26.333 [2024-11-20 18:35:44.931247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.333 [2024-11-20 18:35:44.931293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:26.333 [2024-11-20 18:35:44.931308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.372 ms 00:26:26.333 [2024-11-20 18:35:44.931317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.592 [2024-11-20 18:35:45.030309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.592 [2024-11-20 18:35:45.030377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:26.592 [2024-11-20 18:35:45.030399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.953 ms 00:26:26.592 [2024-11-20 18:35:45.030408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.592 [2024-11-20 18:35:45.030628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.592 [2024-11-20 18:35:45.030644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:26.592 [2024-11-20 18:35:45.030660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:26:26.592 [2024-11-20 18:35:45.030668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.592 [2024-11-20 18:35:45.056898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.592 [2024-11-20 18:35:45.057121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:26:26.592 [2024-11-20 18:35:45.057150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.162 ms 00:26:26.592 [2024-11-20 18:35:45.057161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.592 [2024-11-20 18:35:45.082130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.592 [2024-11-20 18:35:45.082174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:26.592 [2024-11-20 18:35:45.082190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.856 ms 00:26:26.592 [2024-11-20 18:35:45.082198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.592 [2024-11-20 18:35:45.082813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.592 [2024-11-20 18:35:45.082836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:26.592 [2024-11-20 18:35:45.082849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.562 ms 00:26:26.592 [2024-11-20 18:35:45.082858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.592 [2024-11-20 18:35:45.165135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.592 [2024-11-20 18:35:45.165186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:26:26.592 [2024-11-20 18:35:45.165207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.211 ms 00:26:26.592 [2024-11-20 18:35:45.165217] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.592 [2024-11-20 18:35:45.192519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.592 [2024-11-20 18:35:45.192714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:26:26.592 [2024-11-20 18:35:45.192743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.180 ms 00:26:26.592 [2024-11-20 18:35:45.192753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.853 [2024-11-20 18:35:45.219030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.853 [2024-11-20 18:35:45.219077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:26:26.853 [2024-11-20 18:35:45.219107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.191 ms 00:26:26.853 [2024-11-20 18:35:45.219116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.853 [2024-11-20 18:35:45.245697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.853 [2024-11-20 18:35:45.245747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:26.853 [2024-11-20 18:35:45.245763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.521 ms 00:26:26.853 [2024-11-20 18:35:45.245770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.853 [2024-11-20 18:35:45.245827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.853 [2024-11-20 18:35:45.245837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:26.853 [2024-11-20 18:35:45.245851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:26.853 [2024-11-20 18:35:45.245859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.853 [2024-11-20 18:35:45.245955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.853 [2024-11-20 18:35:45.245967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:26.853 [2024-11-20 18:35:45.245980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:26:26.853 [2024-11-20 18:35:45.245988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.853 [2024-11-20 18:35:45.247293] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3601.235 ms, result 0 00:26:26.853 { 00:26:26.853 "name": "ftl0", 00:26:26.853 "uuid": "0be3887f-60af-4a3e-bbbe-2f44c2e518c4" 00:26:26.853 } 00:26:26.853 18:35:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:26:26.853 18:35:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:26:27.112 18:35:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:26:27.112 18:35:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:26:27.112 18:35:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:26:27.112 /dev/nbd0 00:26:27.112 18:35:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:26:27.112 18:35:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:26:27.112 18:35:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:26:27.112 18:35:45 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:27.112 18:35:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:27.112 18:35:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:26:27.372 18:35:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:26:27.372 18:35:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:27.372 18:35:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:27.372 18:35:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:26:27.372 1+0 records in 00:26:27.372 1+0 records out 00:26:27.372 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000630317 s, 6.5 MB/s 00:26:27.372 18:35:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:26:27.372 18:35:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:26:27.372 18:35:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:26:27.372 18:35:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:27.372 18:35:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:26:27.372 18:35:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:26:27.372 [2024-11-20 18:35:45.832489] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:26:27.372 [2024-11-20 18:35:45.832645] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80580 ] 00:26:27.632 [2024-11-20 18:35:45.999834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.632 [2024-11-20 18:35:46.126717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:29.014  [2024-11-20T18:35:48.607Z] Copying: 189/1024 [MB] (189 MBps) [2024-11-20T18:35:49.561Z] Copying: 379/1024 [MB] (189 MBps) [2024-11-20T18:35:50.492Z] Copying: 574/1024 [MB] (194 MBps) [2024-11-20T18:35:51.425Z] Copying: 819/1024 [MB] (245 MBps) [2024-11-20T18:35:51.991Z] Copying: 1024/1024 [MB] (average 212 MBps) 00:26:33.362 00:26:33.362 18:35:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:35.271 18:35:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:26:35.271 [2024-11-20 18:35:53.827766] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:26:35.271 [2024-11-20 18:35:53.828048] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80667 ] 00:26:35.532 [2024-11-20 18:35:53.988645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.532 [2024-11-20 18:35:54.081262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:36.919  [2024-11-20T18:35:56.484Z] Copying: 13/1024 [MB] (13 MBps) [2024-11-20T18:35:57.417Z] Copying: 39/1024 [MB] (25 MBps) [2024-11-20T18:35:58.348Z] Copying: 68/1024 [MB] (29 MBps) [2024-11-20T18:35:59.719Z] Copying: 96/1024 [MB] (28 MBps) [2024-11-20T18:36:00.650Z] Copying: 125/1024 [MB] (28 MBps) [2024-11-20T18:36:01.583Z] Copying: 157/1024 [MB] (31 MBps) [2024-11-20T18:36:02.514Z] Copying: 181/1024 [MB] (24 MBps) [2024-11-20T18:36:03.445Z] Copying: 212/1024 [MB] (30 MBps) [2024-11-20T18:36:04.378Z] Copying: 247/1024 [MB] (34 MBps) [2024-11-20T18:36:05.311Z] Copying: 281/1024 [MB] (34 MBps) [2024-11-20T18:36:06.685Z] Copying: 317/1024 [MB] (35 MBps) [2024-11-20T18:36:07.618Z] Copying: 350/1024 [MB] (33 MBps) [2024-11-20T18:36:08.549Z] Copying: 375/1024 [MB] (25 MBps) [2024-11-20T18:36:09.486Z] Copying: 407/1024 [MB] (31 MBps) [2024-11-20T18:36:10.419Z] Copying: 442/1024 [MB] (34 MBps) [2024-11-20T18:36:11.352Z] Copying: 477/1024 [MB] (34 MBps) [2024-11-20T18:36:12.725Z] Copying: 513/1024 [MB] (35 MBps) [2024-11-20T18:36:13.658Z] Copying: 544/1024 [MB] (31 MBps) [2024-11-20T18:36:14.592Z] Copying: 578/1024 [MB] (34 MBps) [2024-11-20T18:36:15.525Z] Copying: 614/1024 [MB] (35 MBps) [2024-11-20T18:36:16.459Z] Copying: 649/1024 [MB] (34 MBps) [2024-11-20T18:36:17.447Z] Copying: 678/1024 [MB] (29 MBps) [2024-11-20T18:36:18.394Z] Copying: 707/1024 [MB] (28 MBps) [2024-11-20T18:36:19.337Z] Copying: 741/1024 [MB] (34 MBps) [2024-11-20T18:36:20.714Z] Copying: 776/1024 [MB] (34 MBps) [2024-11-20T18:36:21.648Z] Copying: 804/1024 [MB] (28 MBps) [2024-11-20T18:36:22.583Z] Copying: 827/1024 [MB] (22 MBps) [2024-11-20T18:36:23.519Z] Copying: 848/1024 [MB] (20 MBps) [2024-11-20T18:36:24.454Z] Copying: 881/1024 [MB] (33 MBps) [2024-11-20T18:36:25.388Z] Copying: 911/1024 [MB] (29 MBps) [2024-11-20T18:36:26.321Z] Copying: 945/1024 [MB] (34 MBps) [2024-11-20T18:36:27.691Z] Copying: 980/1024 [MB] (35 MBps) [2024-11-20T18:36:27.948Z] Copying: 1011/1024 [MB] (30 MBps) [2024-11-20T18:36:28.515Z] Copying: 1024/1024 [MB] (average 30 MBps) 00:27:09.886 00:27:10.147 18:36:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:27:10.147 18:36:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:27:10.147 18:36:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:27:10.408 [2024-11-20 18:36:28.906160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.408 [2024-11-20 18:36:28.906203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:10.408 [2024-11-20 18:36:28.906215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:10.408 [2024-11-20 18:36:28.906224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.408 [2024-11-20 18:36:28.906243] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO 
channel destroy on app_thread 00:27:10.408 [2024-11-20 18:36:28.908442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.408 [2024-11-20 18:36:28.908475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:10.408 [2024-11-20 18:36:28.908486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.181 ms 00:27:10.408 [2024-11-20 18:36:28.908492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.408 [2024-11-20 18:36:28.911027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.408 [2024-11-20 18:36:28.911054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:10.408 [2024-11-20 18:36:28.911064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.508 ms 00:27:10.408 [2024-11-20 18:36:28.911070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.408 [2024-11-20 18:36:28.926612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.408 [2024-11-20 18:36:28.926641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:10.408 [2024-11-20 18:36:28.926651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.524 ms 00:27:10.408 [2024-11-20 18:36:28.926658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.408 [2024-11-20 18:36:28.931341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.408 [2024-11-20 18:36:28.931363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:10.408 [2024-11-20 18:36:28.931373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.654 ms 00:27:10.408 [2024-11-20 18:36:28.931379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.408 [2024-11-20 18:36:28.950973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.408 [2024-11-20 18:36:28.950999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:10.408 [2024-11-20 18:36:28.951009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.542 ms 00:27:10.408 [2024-11-20 18:36:28.951015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.409 [2024-11-20 18:36:28.964315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.409 [2024-11-20 18:36:28.964342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:10.409 [2024-11-20 18:36:28.964354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.266 ms 00:27:10.409 [2024-11-20 18:36:28.964363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.409 [2024-11-20 18:36:28.964485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.409 [2024-11-20 18:36:28.964494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:10.409 [2024-11-20 18:36:28.964503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:27:10.409 [2024-11-20 18:36:28.964510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.409 [2024-11-20 18:36:28.982975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.409 [2024-11-20 18:36:28.983000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:10.409 [2024-11-20 18:36:28.983010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.450 ms 00:27:10.409 
[2024-11-20 18:36:28.983015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.409 [2024-11-20 18:36:29.001460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.409 [2024-11-20 18:36:29.001485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:10.409 [2024-11-20 18:36:29.001494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.414 ms 00:27:10.409 [2024-11-20 18:36:29.001500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.409 [2024-11-20 18:36:29.019428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.409 [2024-11-20 18:36:29.019453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:10.409 [2024-11-20 18:36:29.019462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.895 ms 00:27:10.409 [2024-11-20 18:36:29.019468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.671 [2024-11-20 18:36:29.037384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.671 [2024-11-20 18:36:29.037409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:10.671 [2024-11-20 18:36:29.037418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.855 ms 00:27:10.671 [2024-11-20 18:36:29.037424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.671 [2024-11-20 18:36:29.037452] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:10.671 [2024-11-20 18:36:29.037464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:10.671 [2024-11-20 18:36:29.037473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:10.671 [2024-11-20 18:36:29.037479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:10.671 [2024-11-20 18:36:29.037487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:10.671 [2024-11-20 18:36:29.037493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:10.671 [2024-11-20 18:36:29.037500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:10.671 [2024-11-20 18:36:29.037506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:10.671 [2024-11-20 18:36:29.037515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:10.671 [2024-11-20 18:36:29.037521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: 
free 00:27:10.672 [2024-11-20 18:36:29.037562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 
261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.037999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.038006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.038012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.038019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.038024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.038031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.038037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.038044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.038050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.038059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.038065] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.038072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.038077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.038084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.038090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.038111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.038117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.038124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:10.672 [2024-11-20 18:36:29.038130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:10.673 [2024-11-20 18:36:29.038138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:10.673 [2024-11-20 18:36:29.038144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:10.673 [2024-11-20 18:36:29.038152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:10.673 [2024-11-20 18:36:29.038164] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:10.673 [2024-11-20 18:36:29.038172] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0be3887f-60af-4a3e-bbbe-2f44c2e518c4 00:27:10.673 [2024-11-20 18:36:29.038179] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:10.673 [2024-11-20 18:36:29.038188] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:10.673 [2024-11-20 18:36:29.038194] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:10.673 [2024-11-20 18:36:29.038203] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:10.673 [2024-11-20 18:36:29.038209] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:10.673 [2024-11-20 18:36:29.038216] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:10.673 [2024-11-20 18:36:29.038222] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:10.673 [2024-11-20 18:36:29.038229] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:10.673 [2024-11-20 18:36:29.038234] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:10.673 [2024-11-20 18:36:29.038241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.673 [2024-11-20 18:36:29.038247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:10.673 [2024-11-20 18:36:29.038256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.789 ms 00:27:10.673 [2024-11-20 18:36:29.038262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.673 [2024-11-20 18:36:29.048432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.673 [2024-11-20 18:36:29.048560] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:10.673 [2024-11-20 18:36:29.048578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.145 ms 00:27:10.673 [2024-11-20 18:36:29.048584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.673 [2024-11-20 18:36:29.048875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.673 [2024-11-20 18:36:29.048882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:10.673 [2024-11-20 18:36:29.048891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:27:10.673 [2024-11-20 18:36:29.048896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.673 [2024-11-20 18:36:29.083837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:10.673 [2024-11-20 18:36:29.083957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:10.673 [2024-11-20 18:36:29.083973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:10.673 [2024-11-20 18:36:29.083980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.673 [2024-11-20 18:36:29.084029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:10.673 [2024-11-20 18:36:29.084036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:10.673 [2024-11-20 18:36:29.084044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:10.673 [2024-11-20 18:36:29.084050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.673 [2024-11-20 18:36:29.084156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:10.673 [2024-11-20 18:36:29.084166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:10.673 [2024-11-20 18:36:29.084177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:10.673 [2024-11-20 18:36:29.084183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.673 [2024-11-20 18:36:29.084200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:10.673 [2024-11-20 18:36:29.084207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:10.673 [2024-11-20 18:36:29.084214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:10.673 [2024-11-20 18:36:29.084221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.673 [2024-11-20 18:36:29.147491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:10.673 [2024-11-20 18:36:29.147527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:10.673 [2024-11-20 18:36:29.147538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:10.673 [2024-11-20 18:36:29.147544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.673 [2024-11-20 18:36:29.199329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:10.673 [2024-11-20 18:36:29.199362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:10.673 [2024-11-20 18:36:29.199373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:10.673 [2024-11-20 18:36:29.199380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.673 [2024-11-20 18:36:29.199454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:27:10.673 [2024-11-20 18:36:29.199462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:10.673 [2024-11-20 18:36:29.199471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:10.673 [2024-11-20 18:36:29.199479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.673 [2024-11-20 18:36:29.199539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:10.673 [2024-11-20 18:36:29.199548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:10.673 [2024-11-20 18:36:29.199556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:10.673 [2024-11-20 18:36:29.199562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.673 [2024-11-20 18:36:29.199643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:10.673 [2024-11-20 18:36:29.199652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:10.673 [2024-11-20 18:36:29.199661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:10.673 [2024-11-20 18:36:29.199667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.673 [2024-11-20 18:36:29.199697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:10.673 [2024-11-20 18:36:29.199705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:10.673 [2024-11-20 18:36:29.199713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:10.673 [2024-11-20 18:36:29.199718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.673 [2024-11-20 18:36:29.199756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:10.673 [2024-11-20 18:36:29.199763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:10.673 [2024-11-20 18:36:29.199771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:10.673 [2024-11-20 18:36:29.199777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.673 [2024-11-20 18:36:29.199822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:10.673 [2024-11-20 18:36:29.199831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:10.673 [2024-11-20 18:36:29.199838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:10.673 [2024-11-20 18:36:29.199846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.673 [2024-11-20 18:36:29.199964] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 293.766 ms, result 0 00:27:10.673 true 00:27:10.673 18:36:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 80441 00:27:10.673 18:36:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid80441 00:27:10.673 18:36:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:27:10.673 [2024-11-20 18:36:29.283541] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:27:10.673 [2024-11-20 18:36:29.283654] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81044 ] 00:27:10.934 [2024-11-20 18:36:29.429328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:10.934 [2024-11-20 18:36:29.521014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:12.320  [2024-11-20T18:36:31.888Z] Copying: 256/1024 [MB] (256 MBps) [2024-11-20T18:36:32.831Z] Copying: 515/1024 [MB] (259 MBps) [2024-11-20T18:36:33.897Z] Copying: 768/1024 [MB] (253 MBps) [2024-11-20T18:36:33.897Z] Copying: 1023/1024 [MB] (254 MBps) [2024-11-20T18:36:34.468Z] Copying: 1024/1024 [MB] (average 255 MBps) 00:27:15.839 00:27:15.839 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 80441 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:27:15.839 18:36:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:15.839 [2024-11-20 18:36:34.385938] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:27:15.839 [2024-11-20 18:36:34.386214] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81101 ] 00:27:16.100 [2024-11-20 18:36:34.542381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.100 [2024-11-20 18:36:34.628208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.361 [2024-11-20 18:36:34.858571] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:16.361 [2024-11-20 18:36:34.858777] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:16.361 [2024-11-20 18:36:34.925090] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:27:16.361 [2024-11-20 18:36:34.925710] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:27:16.361 [2024-11-20 18:36:34.926321] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:27:16.934 [2024-11-20 18:36:35.301534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.934 [2024-11-20 18:36:35.301763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:16.934 [2024-11-20 18:36:35.301967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:16.934 [2024-11-20 18:36:35.302013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.934 [2024-11-20 18:36:35.302132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.934 [2024-11-20 18:36:35.302147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:16.934 [2024-11-20 18:36:35.302158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:27:16.934 [2024-11-20 18:36:35.302166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.934 [2024-11-20 18:36:35.302190] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:16.934 
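
The sh -x trace above captures the whole ftl_dirty_shutdown exercise. Condensed to just the commands this log actually records, the sequence is the sketch below; $SPDK abbreviates /home/vagrant/spdk_repo/spdk, and PID 80441 is specific to this run (the spdk_tgt instance the harness started earlier), so treat it as orientation for reading the surrounding traces rather than a standalone script.

    modprobe nbd
    $SPDK/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0    # expose ftl0 as /dev/nbd0; waitfornbd then polls /proc/partitions and test-reads 4 KiB
    $SPDK/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=$SPDK/test/ftl/testfile --bs=4096 --count=262144
    md5sum $SPDK/test/ftl/testfile                        # reference checksum of the 1 GiB pattern (262144 x 4096 bytes)
    $SPDK/build/bin/spdk_dd -m 0x2 --if=$SPDK/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct
    sync /dev/nbd0
    $SPDK/scripts/rpc.py nbd_stop_disk /dev/nbd0
    $SPDK/scripts/rpc.py bdev_ftl_unload -b ftl0          # clean 'FTL shutdown', result 0 above
    kill -9 80441                                         # SIGKILL the spdk_tgt that owned the bdevs
    rm -f /dev/shm/spdk_tgt_trace.pid80441
    $SPDK/build/bin/spdk_dd --if=/dev/urandom --of=$SPDK/test/ftl/testfile2 --bs=4096 --count=262144
    $SPDK/build/bin/spdk_dd --if=$SPDK/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=$SPDK/test/ftl/config/ftl.json
                                                          # reopen ftl0 from the saved bdev config; the blobstore
                                                          # recovery and second FTL startup traced here follow
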
[2024-11-20 18:36:35.302972] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:16.934 [2024-11-20 18:36:35.303009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.934 [2024-11-20 18:36:35.303019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:16.934 [2024-11-20 18:36:35.303028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.825 ms 00:27:16.934 [2024-11-20 18:36:35.303036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.934 [2024-11-20 18:36:35.304823] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:16.934 [2024-11-20 18:36:35.319366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.934 [2024-11-20 18:36:35.319424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:16.934 [2024-11-20 18:36:35.319439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.546 ms 00:27:16.934 [2024-11-20 18:36:35.319447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.934 [2024-11-20 18:36:35.319524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.934 [2024-11-20 18:36:35.319534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:16.934 [2024-11-20 18:36:35.319543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:27:16.934 [2024-11-20 18:36:35.319551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.934 [2024-11-20 18:36:35.328128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.934 [2024-11-20 18:36:35.328172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:16.934 [2024-11-20 18:36:35.328183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.493 ms 00:27:16.934 [2024-11-20 18:36:35.328191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.934 [2024-11-20 18:36:35.328275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.934 [2024-11-20 18:36:35.328285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:16.934 [2024-11-20 18:36:35.328294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:27:16.934 [2024-11-20 18:36:35.328302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.934 [2024-11-20 18:36:35.328375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.934 [2024-11-20 18:36:35.328390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:16.934 [2024-11-20 18:36:35.328399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:27:16.934 [2024-11-20 18:36:35.328407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.934 [2024-11-20 18:36:35.328433] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:16.934 [2024-11-20 18:36:35.332468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.934 [2024-11-20 18:36:35.332508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:16.934 [2024-11-20 18:36:35.332519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.042 ms 00:27:16.934 [2024-11-20 18:36:35.332527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:27:16.934 [2024-11-20 18:36:35.332562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.934 [2024-11-20 18:36:35.332571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:16.934 [2024-11-20 18:36:35.332580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:16.934 [2024-11-20 18:36:35.332588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.934 [2024-11-20 18:36:35.332643] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:16.934 [2024-11-20 18:36:35.332671] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:16.934 [2024-11-20 18:36:35.332708] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:16.934 [2024-11-20 18:36:35.332726] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:16.934 [2024-11-20 18:36:35.332832] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:16.934 [2024-11-20 18:36:35.332843] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:16.934 [2024-11-20 18:36:35.332855] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:16.934 [2024-11-20 18:36:35.332866] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:16.934 [2024-11-20 18:36:35.332878] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:16.934 [2024-11-20 18:36:35.332886] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:16.934 [2024-11-20 18:36:35.332894] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:16.934 [2024-11-20 18:36:35.332902] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:16.934 [2024-11-20 18:36:35.332909] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:16.934 [2024-11-20 18:36:35.332917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.934 [2024-11-20 18:36:35.332926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:16.934 [2024-11-20 18:36:35.332933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:27:16.934 [2024-11-20 18:36:35.332940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.934 [2024-11-20 18:36:35.333023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.934 [2024-11-20 18:36:35.333035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:16.934 [2024-11-20 18:36:35.333042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:27:16.934 [2024-11-20 18:36:35.333049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.934 [2024-11-20 18:36:35.333181] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:16.934 [2024-11-20 18:36:35.333194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:16.934 [2024-11-20 18:36:35.333202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:16.934 [2024-11-20 18:36:35.333210] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:16.934 [2024-11-20 18:36:35.333218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:16.934 [2024-11-20 18:36:35.333225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:16.934 [2024-11-20 18:36:35.333233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:16.934 [2024-11-20 18:36:35.333241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:16.934 [2024-11-20 18:36:35.333248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:16.934 [2024-11-20 18:36:35.333255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:16.935 [2024-11-20 18:36:35.333262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:16.935 [2024-11-20 18:36:35.333275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:16.935 [2024-11-20 18:36:35.333282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:16.935 [2024-11-20 18:36:35.333290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:16.935 [2024-11-20 18:36:35.333298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:16.935 [2024-11-20 18:36:35.333305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:16.935 [2024-11-20 18:36:35.333312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:16.935 [2024-11-20 18:36:35.333320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:16.935 [2024-11-20 18:36:35.333326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:16.935 [2024-11-20 18:36:35.333334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:16.935 [2024-11-20 18:36:35.333341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:16.935 [2024-11-20 18:36:35.333349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:16.935 [2024-11-20 18:36:35.333356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:16.935 [2024-11-20 18:36:35.333367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:16.935 [2024-11-20 18:36:35.333373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:16.935 [2024-11-20 18:36:35.333380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:16.935 [2024-11-20 18:36:35.333387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:16.935 [2024-11-20 18:36:35.333393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:16.935 [2024-11-20 18:36:35.333400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:16.935 [2024-11-20 18:36:35.333407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:16.935 [2024-11-20 18:36:35.333414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:16.935 [2024-11-20 18:36:35.333420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:16.935 [2024-11-20 18:36:35.333427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:16.935 [2024-11-20 18:36:35.333434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:16.935 [2024-11-20 18:36:35.333441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:16.935 
[2024-11-20 18:36:35.333448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:16.935 [2024-11-20 18:36:35.333455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:16.935 [2024-11-20 18:36:35.333462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:16.935 [2024-11-20 18:36:35.333470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:16.935 [2024-11-20 18:36:35.333476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:16.935 [2024-11-20 18:36:35.333483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:16.935 [2024-11-20 18:36:35.333490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:16.935 [2024-11-20 18:36:35.333497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:16.935 [2024-11-20 18:36:35.333503] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:16.935 [2024-11-20 18:36:35.333511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:16.935 [2024-11-20 18:36:35.333520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:16.935 [2024-11-20 18:36:35.333530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:16.935 [2024-11-20 18:36:35.333538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:16.935 [2024-11-20 18:36:35.333545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:16.935 [2024-11-20 18:36:35.333551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:16.935 [2024-11-20 18:36:35.333558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:16.935 [2024-11-20 18:36:35.333564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:16.935 [2024-11-20 18:36:35.333570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:16.935 [2024-11-20 18:36:35.333580] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:16.935 [2024-11-20 18:36:35.333589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:16.935 [2024-11-20 18:36:35.333598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:16.935 [2024-11-20 18:36:35.333605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:16.935 [2024-11-20 18:36:35.333612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:16.935 [2024-11-20 18:36:35.333618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:16.935 [2024-11-20 18:36:35.333625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:16.935 [2024-11-20 18:36:35.333632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:16.935 [2024-11-20 18:36:35.333640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 
blk_sz:0x800 00:27:16.935 [2024-11-20 18:36:35.333647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:16.935 [2024-11-20 18:36:35.333653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:16.935 [2024-11-20 18:36:35.333660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:16.935 [2024-11-20 18:36:35.333667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:16.935 [2024-11-20 18:36:35.333675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:16.935 [2024-11-20 18:36:35.333682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:16.935 [2024-11-20 18:36:35.333689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:16.935 [2024-11-20 18:36:35.333697] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:16.935 [2024-11-20 18:36:35.333706] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:16.935 [2024-11-20 18:36:35.333714] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:16.935 [2024-11-20 18:36:35.333721] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:16.935 [2024-11-20 18:36:35.333727] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:16.935 [2024-11-20 18:36:35.333735] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:16.935 [2024-11-20 18:36:35.333742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.935 [2024-11-20 18:36:35.333750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:16.935 [2024-11-20 18:36:35.333765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.656 ms 00:27:16.935 [2024-11-20 18:36:35.333772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.935 [2024-11-20 18:36:35.366281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.935 [2024-11-20 18:36:35.366473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:16.935 [2024-11-20 18:36:35.366494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.462 ms 00:27:16.935 [2024-11-20 18:36:35.366504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.935 [2024-11-20 18:36:35.366599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.935 [2024-11-20 18:36:35.366616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:16.935 [2024-11-20 18:36:35.366626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:27:16.935 [2024-11-20 
18:36:35.366635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.935 [2024-11-20 18:36:35.409732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.935 [2024-11-20 18:36:35.409789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:16.935 [2024-11-20 18:36:35.409803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.035 ms 00:27:16.935 [2024-11-20 18:36:35.409816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.935 [2024-11-20 18:36:35.409869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.935 [2024-11-20 18:36:35.409879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:16.935 [2024-11-20 18:36:35.409889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:16.935 [2024-11-20 18:36:35.409897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.935 [2024-11-20 18:36:35.410570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.935 [2024-11-20 18:36:35.410596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:16.935 [2024-11-20 18:36:35.410608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.590 ms 00:27:16.935 [2024-11-20 18:36:35.410618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.935 [2024-11-20 18:36:35.410789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.935 [2024-11-20 18:36:35.410810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:16.935 [2024-11-20 18:36:35.410819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:27:16.935 [2024-11-20 18:36:35.410828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.935 [2024-11-20 18:36:35.426860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.935 [2024-11-20 18:36:35.427058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:16.935 [2024-11-20 18:36:35.427078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.009 ms 00:27:16.935 [2024-11-20 18:36:35.427086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.935 [2024-11-20 18:36:35.441498] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:16.936 [2024-11-20 18:36:35.441678] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:16.936 [2024-11-20 18:36:35.441698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.936 [2024-11-20 18:36:35.441707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:16.936 [2024-11-20 18:36:35.441718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.474 ms 00:27:16.936 [2024-11-20 18:36:35.441725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.936 [2024-11-20 18:36:35.468230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.936 [2024-11-20 18:36:35.468434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:16.936 [2024-11-20 18:36:35.468470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.458 ms 00:27:16.936 [2024-11-20 18:36:35.468478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:27:16.936 [2024-11-20 18:36:35.481473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.936 [2024-11-20 18:36:35.481524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:16.936 [2024-11-20 18:36:35.481536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.946 ms 00:27:16.936 [2024-11-20 18:36:35.481543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.936 [2024-11-20 18:36:35.493939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.936 [2024-11-20 18:36:35.493986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:16.936 [2024-11-20 18:36:35.493998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.346 ms 00:27:16.936 [2024-11-20 18:36:35.494005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.936 [2024-11-20 18:36:35.494682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.936 [2024-11-20 18:36:35.494716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:16.936 [2024-11-20 18:36:35.494727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:27:16.936 [2024-11-20 18:36:35.494735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.196 [2024-11-20 18:36:35.560881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.197 [2024-11-20 18:36:35.560945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:17.197 [2024-11-20 18:36:35.560961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.125 ms 00:27:17.197 [2024-11-20 18:36:35.560971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.197 [2024-11-20 18:36:35.572587] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:17.197 [2024-11-20 18:36:35.575849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.197 [2024-11-20 18:36:35.575895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:17.197 [2024-11-20 18:36:35.575907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.816 ms 00:27:17.197 [2024-11-20 18:36:35.575916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.197 [2024-11-20 18:36:35.576014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.197 [2024-11-20 18:36:35.576026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:17.197 [2024-11-20 18:36:35.576036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:17.197 [2024-11-20 18:36:35.576045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.197 [2024-11-20 18:36:35.576142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.197 [2024-11-20 18:36:35.576155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:17.197 [2024-11-20 18:36:35.576164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:27:17.197 [2024-11-20 18:36:35.576173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.197 [2024-11-20 18:36:35.576194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.197 [2024-11-20 18:36:35.576207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 
00:27:17.197 [2024-11-20 18:36:35.576217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:17.197 [2024-11-20 18:36:35.576225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.197 [2024-11-20 18:36:35.576262] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:17.197 [2024-11-20 18:36:35.576273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.197 [2024-11-20 18:36:35.576282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:17.197 [2024-11-20 18:36:35.576291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:17.197 [2024-11-20 18:36:35.576299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.197 [2024-11-20 18:36:35.602080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.197 [2024-11-20 18:36:35.602280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:17.197 [2024-11-20 18:36:35.602305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.758 ms 00:27:17.197 [2024-11-20 18:36:35.602313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.197 [2024-11-20 18:36:35.602402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.197 [2024-11-20 18:36:35.602412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:17.197 [2024-11-20 18:36:35.602421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:27:17.197 [2024-11-20 18:36:35.602429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.197 [2024-11-20 18:36:35.603831] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 301.803 ms, result 0 00:27:18.139  [2024-11-20T18:36:37.710Z] Copying: 11/1024 [MB] (11 MBps) [2024-11-20T18:36:38.655Z] Copying: 31/1024 [MB] (20 MBps) [2024-11-20T18:36:40.042Z] Copying: 46/1024 [MB] (14 MBps) [2024-11-20T18:36:40.617Z] Copying: 65/1024 [MB] (18 MBps) [2024-11-20T18:36:42.004Z] Copying: 83/1024 [MB] (17 MBps) [2024-11-20T18:36:42.949Z] Copying: 96/1024 [MB] (13 MBps) [2024-11-20T18:36:43.892Z] Copying: 106/1024 [MB] (10 MBps) [2024-11-20T18:36:44.834Z] Copying: 119/1024 [MB] (13 MBps) [2024-11-20T18:36:45.777Z] Copying: 138/1024 [MB] (18 MBps) [2024-11-20T18:36:46.719Z] Copying: 154/1024 [MB] (15 MBps) [2024-11-20T18:36:47.664Z] Copying: 170/1024 [MB] (16 MBps) [2024-11-20T18:36:49.049Z] Copying: 190/1024 [MB] (19 MBps) [2024-11-20T18:36:49.626Z] Copying: 207/1024 [MB] (17 MBps) [2024-11-20T18:36:51.014Z] Copying: 225/1024 [MB] (17 MBps) [2024-11-20T18:36:51.959Z] Copying: 243/1024 [MB] (17 MBps) [2024-11-20T18:36:52.901Z] Copying: 261/1024 [MB] (17 MBps) [2024-11-20T18:36:53.845Z] Copying: 279/1024 [MB] (18 MBps) [2024-11-20T18:36:54.789Z] Copying: 299/1024 [MB] (19 MBps) [2024-11-20T18:36:55.733Z] Copying: 319/1024 [MB] (19 MBps) [2024-11-20T18:36:56.677Z] Copying: 339/1024 [MB] (20 MBps) [2024-11-20T18:36:57.634Z] Copying: 357/1024 [MB] (17 MBps) [2024-11-20T18:36:59.020Z] Copying: 368/1024 [MB] (11 MBps) [2024-11-20T18:36:59.962Z] Copying: 379/1024 [MB] (10 MBps) [2024-11-20T18:37:00.905Z] Copying: 390/1024 [MB] (10 MBps) [2024-11-20T18:37:01.848Z] Copying: 401/1024 [MB] (11 MBps) [2024-11-20T18:37:02.792Z] Copying: 412/1024 [MB] (10 MBps) [2024-11-20T18:37:03.735Z] Copying: 423/1024 [MB] (10 MBps) [2024-11-20T18:37:04.678Z] Copying: 
434/1024 [MB] (10 MBps) [2024-11-20T18:37:05.622Z] Copying: 444/1024 [MB] (10 MBps) [2024-11-20T18:37:07.007Z] Copying: 455/1024 [MB] (10 MBps) [2024-11-20T18:37:07.951Z] Copying: 471/1024 [MB] (15 MBps) [2024-11-20T18:37:08.898Z] Copying: 483/1024 [MB] (12 MBps) [2024-11-20T18:37:09.840Z] Copying: 494/1024 [MB] (10 MBps) [2024-11-20T18:37:10.780Z] Copying: 505/1024 [MB] (11 MBps) [2024-11-20T18:37:11.722Z] Copying: 523/1024 [MB] (17 MBps) [2024-11-20T18:37:12.664Z] Copying: 537/1024 [MB] (14 MBps) [2024-11-20T18:37:14.052Z] Copying: 551/1024 [MB] (13 MBps) [2024-11-20T18:37:14.623Z] Copying: 564/1024 [MB] (13 MBps) [2024-11-20T18:37:16.009Z] Copying: 582/1024 [MB] (17 MBps) [2024-11-20T18:37:16.953Z] Copying: 600/1024 [MB] (17 MBps) [2024-11-20T18:37:17.897Z] Copying: 611/1024 [MB] (11 MBps) [2024-11-20T18:37:18.885Z] Copying: 625/1024 [MB] (14 MBps) [2024-11-20T18:37:19.873Z] Copying: 641/1024 [MB] (15 MBps) [2024-11-20T18:37:20.817Z] Copying: 660/1024 [MB] (19 MBps) [2024-11-20T18:37:21.763Z] Copying: 672/1024 [MB] (11 MBps) [2024-11-20T18:37:22.709Z] Copying: 690/1024 [MB] (18 MBps) [2024-11-20T18:37:23.652Z] Copying: 709/1024 [MB] (18 MBps) [2024-11-20T18:37:25.040Z] Copying: 724/1024 [MB] (14 MBps) [2024-11-20T18:37:25.984Z] Copying: 737/1024 [MB] (13 MBps) [2024-11-20T18:37:26.921Z] Copying: 749/1024 [MB] (12 MBps) [2024-11-20T18:37:27.860Z] Copying: 778/1024 [MB] (28 MBps) [2024-11-20T18:37:28.803Z] Copying: 800/1024 [MB] (21 MBps) [2024-11-20T18:37:29.748Z] Copying: 817/1024 [MB] (17 MBps) [2024-11-20T18:37:30.688Z] Copying: 829/1024 [MB] (11 MBps) [2024-11-20T18:37:31.629Z] Copying: 840/1024 [MB] (11 MBps) [2024-11-20T18:37:33.018Z] Copying: 852/1024 [MB] (11 MBps) [2024-11-20T18:37:33.963Z] Copying: 864/1024 [MB] (12 MBps) [2024-11-20T18:37:34.906Z] Copying: 874/1024 [MB] (10 MBps) [2024-11-20T18:37:35.846Z] Copying: 885/1024 [MB] (10 MBps) [2024-11-20T18:37:36.791Z] Copying: 901/1024 [MB] (15 MBps) [2024-11-20T18:37:37.734Z] Copying: 917/1024 [MB] (15 MBps) [2024-11-20T18:37:38.675Z] Copying: 927/1024 [MB] (10 MBps) [2024-11-20T18:37:40.051Z] Copying: 938/1024 [MB] (10 MBps) [2024-11-20T18:37:40.624Z] Copying: 972/1024 [MB] (34 MBps) [2024-11-20T18:37:42.009Z] Copying: 992/1024 [MB] (20 MBps) [2024-11-20T18:37:42.954Z] Copying: 1003/1024 [MB] (11 MBps) [2024-11-20T18:37:42.954Z] Copying: 1024/1024 [MB] (20 MBps) [2024-11-20T18:37:42.954Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-20 18:37:42.615964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.325 [2024-11-20 18:37:42.616022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:24.325 [2024-11-20 18:37:42.616039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:24.325 [2024-11-20 18:37:42.616048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.325 [2024-11-20 18:37:42.616072] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:24.325 [2024-11-20 18:37:42.619118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.325 [2024-11-20 18:37:42.619154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:24.325 [2024-11-20 18:37:42.619167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.029 ms 00:28:24.325 [2024-11-20 18:37:42.619176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.325 [2024-11-20 18:37:42.622310] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:28:24.325 [2024-11-20 18:37:42.622366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:24.325 [2024-11-20 18:37:42.622378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.108 ms 00:28:24.325 [2024-11-20 18:37:42.622386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.325 [2024-11-20 18:37:42.641201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.325 [2024-11-20 18:37:42.641400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:24.325 [2024-11-20 18:37:42.641422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.797 ms 00:28:24.325 [2024-11-20 18:37:42.641432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.325 [2024-11-20 18:37:42.647615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.325 [2024-11-20 18:37:42.647654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:24.325 [2024-11-20 18:37:42.647679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.145 ms 00:28:24.325 [2024-11-20 18:37:42.647687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.325 [2024-11-20 18:37:42.674100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.325 [2024-11-20 18:37:42.674144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:24.325 [2024-11-20 18:37:42.674156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.339 ms 00:28:24.325 [2024-11-20 18:37:42.674164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.325 [2024-11-20 18:37:42.690339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.325 [2024-11-20 18:37:42.690534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:24.325 [2024-11-20 18:37:42.690556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.129 ms 00:28:24.325 [2024-11-20 18:37:42.690565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.325 [2024-11-20 18:37:42.693884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.325 [2024-11-20 18:37:42.694044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:24.325 [2024-11-20 18:37:42.694064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.234 ms 00:28:24.325 [2024-11-20 18:37:42.694080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.325 [2024-11-20 18:37:42.720028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.325 [2024-11-20 18:37:42.720073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:24.325 [2024-11-20 18:37:42.720086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.905 ms 00:28:24.325 [2024-11-20 18:37:42.720117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.325 [2024-11-20 18:37:42.745293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.325 [2024-11-20 18:37:42.745336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:24.325 [2024-11-20 18:37:42.745347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.129 ms 00:28:24.325 [2024-11-20 18:37:42.745354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.325 
[2024-11-20 18:37:42.770460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.325 [2024-11-20 18:37:42.770502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:24.325 [2024-11-20 18:37:42.770513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.062 ms 00:28:24.325 [2024-11-20 18:37:42.770521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.325 [2024-11-20 18:37:42.795272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.325 [2024-11-20 18:37:42.795314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:24.325 [2024-11-20 18:37:42.795325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.680 ms 00:28:24.325 [2024-11-20 18:37:42.795333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.325 [2024-11-20 18:37:42.795377] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:24.325 [2024-11-20 18:37:42.795393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 1024 / 261120 wr_cnt: 1 state: open 00:28:24.325 [2024-11-20 18:37:42.795404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:24.325 [2024-11-20 18:37:42.795412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:24.325 [2024-11-20 18:37:42.795420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:24.325 [2024-11-20 18:37:42.795428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:24.325 [2024-11-20 18:37:42.795436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:24.325 [2024-11-20 18:37:42.795444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:24.325 [2024-11-20 18:37:42.795451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:24.325 [2024-11-20 18:37:42.795459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:24.325 [2024-11-20 18:37:42.795467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:24.325 [2024-11-20 18:37:42.795474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:24.325 [2024-11-20 18:37:42.795482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:24.325 [2024-11-20 18:37:42.795490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:24.325 [2024-11-20 18:37:42.795499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:24.325 [2024-11-20 18:37:42.795506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:24.325 [2024-11-20 18:37:42.795515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 
wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795947] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.795999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.796007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.796015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.796023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.796030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.796037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.796045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.796052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.796059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.796067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.796075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.796083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.796091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.796126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.796134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.796142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.796149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.796157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.796164] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.796171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.796179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.796187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.796195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.796203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.796212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.796219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:24.326 [2024-11-20 18:37:42.796237] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:24.326 [2024-11-20 18:37:42.796245] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0be3887f-60af-4a3e-bbbe-2f44c2e518c4 00:28:24.326 [2024-11-20 18:37:42.796253] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 1024 00:28:24.326 [2024-11-20 18:37:42.796261] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1984 00:28:24.326 [2024-11-20 18:37:42.796281] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 1024 00:28:24.327 [2024-11-20 18:37:42.796289] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.9375 00:28:24.327 [2024-11-20 18:37:42.796297] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:24.327 [2024-11-20 18:37:42.796304] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:24.327 [2024-11-20 18:37:42.796312] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:24.327 [2024-11-20 18:37:42.796318] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:24.327 [2024-11-20 18:37:42.796324] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:24.327 [2024-11-20 18:37:42.796332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.327 [2024-11-20 18:37:42.796342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:24.327 [2024-11-20 18:37:42.796352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.956 ms 00:28:24.327 [2024-11-20 18:37:42.796371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.327 [2024-11-20 18:37:42.810013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.327 [2024-11-20 18:37:42.810052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:24.327 [2024-11-20 18:37:42.810063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.620 ms 00:28:24.327 [2024-11-20 18:37:42.810072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.327 [2024-11-20 18:37:42.810498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.327 [2024-11-20 18:37:42.810511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:24.327 [2024-11-20 18:37:42.810522] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:28:24.327 [2024-11-20 18:37:42.810531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.327 [2024-11-20 18:37:42.846560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.327 [2024-11-20 18:37:42.846603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:24.327 [2024-11-20 18:37:42.846616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.327 [2024-11-20 18:37:42.846624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.327 [2024-11-20 18:37:42.846689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.327 [2024-11-20 18:37:42.846699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:24.327 [2024-11-20 18:37:42.846708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.327 [2024-11-20 18:37:42.846717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.327 [2024-11-20 18:37:42.846811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.327 [2024-11-20 18:37:42.846824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:24.327 [2024-11-20 18:37:42.846834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.327 [2024-11-20 18:37:42.846843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.327 [2024-11-20 18:37:42.846859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.327 [2024-11-20 18:37:42.846868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:24.327 [2024-11-20 18:37:42.846876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.327 [2024-11-20 18:37:42.846883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.327 [2024-11-20 18:37:42.931642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.327 [2024-11-20 18:37:42.931693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:24.327 [2024-11-20 18:37:42.931718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.327 [2024-11-20 18:37:42.931727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.588 [2024-11-20 18:37:43.002022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.588 [2024-11-20 18:37:43.002073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:24.588 [2024-11-20 18:37:43.002085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.588 [2024-11-20 18:37:43.002112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.588 [2024-11-20 18:37:43.002174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.588 [2024-11-20 18:37:43.002190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:24.588 [2024-11-20 18:37:43.002199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.588 [2024-11-20 18:37:43.002207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.588 [2024-11-20 18:37:43.002271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.588 [2024-11-20 18:37:43.002284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize bands 00:28:24.588 [2024-11-20 18:37:43.002293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.588 [2024-11-20 18:37:43.002304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.588 [2024-11-20 18:37:43.002402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.588 [2024-11-20 18:37:43.002416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:24.588 [2024-11-20 18:37:43.002429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.588 [2024-11-20 18:37:43.002437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.588 [2024-11-20 18:37:43.002469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.588 [2024-11-20 18:37:43.002481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:24.588 [2024-11-20 18:37:43.002489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.588 [2024-11-20 18:37:43.002497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.588 [2024-11-20 18:37:43.002538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.588 [2024-11-20 18:37:43.002549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:24.588 [2024-11-20 18:37:43.002564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.588 [2024-11-20 18:37:43.002572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.588 [2024-11-20 18:37:43.002621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.588 [2024-11-20 18:37:43.002634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:24.588 [2024-11-20 18:37:43.002642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.588 [2024-11-20 18:37:43.002652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.588 [2024-11-20 18:37:43.002791] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 386.793 ms, result 0 00:28:25.973 00:28:25.974 00:28:25.974 18:37:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:28:27.888 18:37:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:27.888 [2024-11-20 18:37:46.443474] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:28:27.888 [2024-11-20 18:37:46.443562] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81826 ] 00:28:28.150 [2024-11-20 18:37:46.598648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.150 [2024-11-20 18:37:46.709278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.411 [2024-11-20 18:37:46.998370] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:28.411 [2024-11-20 18:37:46.998450] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:28.673 [2024-11-20 18:37:47.159985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.673 [2024-11-20 18:37:47.160259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:28.673 [2024-11-20 18:37:47.160291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:28.673 [2024-11-20 18:37:47.160301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.673 [2024-11-20 18:37:47.160371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.673 [2024-11-20 18:37:47.160382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:28.673 [2024-11-20 18:37:47.160395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:28:28.673 [2024-11-20 18:37:47.160403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.673 [2024-11-20 18:37:47.160425] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:28.673 [2024-11-20 18:37:47.161202] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:28.673 [2024-11-20 18:37:47.161226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.673 [2024-11-20 18:37:47.161236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:28.673 [2024-11-20 18:37:47.161246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.807 ms 00:28:28.673 [2024-11-20 18:37:47.161253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.673 [2024-11-20 18:37:47.163009] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:28.673 [2024-11-20 18:37:47.176885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.673 [2024-11-20 18:37:47.176933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:28.673 [2024-11-20 18:37:47.176948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.878 ms 00:28:28.673 [2024-11-20 18:37:47.176956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.673 [2024-11-20 18:37:47.177033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.673 [2024-11-20 18:37:47.177043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:28.673 [2024-11-20 18:37:47.177052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:28:28.673 [2024-11-20 18:37:47.177060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.673 [2024-11-20 18:37:47.185315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:28.673 [2024-11-20 18:37:47.185355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:28.673 [2024-11-20 18:37:47.185367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.154 ms 00:28:28.673 [2024-11-20 18:37:47.185375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.673 [2024-11-20 18:37:47.185461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.673 [2024-11-20 18:37:47.185471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:28.673 [2024-11-20 18:37:47.185480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:28:28.673 [2024-11-20 18:37:47.185489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.673 [2024-11-20 18:37:47.185536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.673 [2024-11-20 18:37:47.185547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:28.673 [2024-11-20 18:37:47.185556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:28.673 [2024-11-20 18:37:47.185566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.673 [2024-11-20 18:37:47.185592] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:28.673 [2024-11-20 18:37:47.189660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.673 [2024-11-20 18:37:47.189697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:28.673 [2024-11-20 18:37:47.189709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.076 ms 00:28:28.673 [2024-11-20 18:37:47.189720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.673 [2024-11-20 18:37:47.189754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.674 [2024-11-20 18:37:47.189763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:28.674 [2024-11-20 18:37:47.189772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:28:28.674 [2024-11-20 18:37:47.189780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.674 [2024-11-20 18:37:47.189830] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:28.674 [2024-11-20 18:37:47.189854] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:28.674 [2024-11-20 18:37:47.189892] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:28.674 [2024-11-20 18:37:47.189914] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:28.674 [2024-11-20 18:37:47.190020] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:28.674 [2024-11-20 18:37:47.190034] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:28.674 [2024-11-20 18:37:47.190045] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:28.674 [2024-11-20 18:37:47.190056] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:28.674 [2024-11-20 18:37:47.190067] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:28.674 [2024-11-20 18:37:47.190077] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:28.674 [2024-11-20 18:37:47.190086] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:28.674 [2024-11-20 18:37:47.190117] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:28.674 [2024-11-20 18:37:47.190126] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:28.674 [2024-11-20 18:37:47.190139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.674 [2024-11-20 18:37:47.190149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:28.674 [2024-11-20 18:37:47.190158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:28:28.674 [2024-11-20 18:37:47.190166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.674 [2024-11-20 18:37:47.190251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.674 [2024-11-20 18:37:47.190262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:28.674 [2024-11-20 18:37:47.190271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:28:28.674 [2024-11-20 18:37:47.190278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.674 [2024-11-20 18:37:47.190382] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:28.674 [2024-11-20 18:37:47.190396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:28.674 [2024-11-20 18:37:47.190406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:28.674 [2024-11-20 18:37:47.190415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:28.674 [2024-11-20 18:37:47.190424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:28.674 [2024-11-20 18:37:47.190433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:28.674 [2024-11-20 18:37:47.190441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:28.674 [2024-11-20 18:37:47.190450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:28.674 [2024-11-20 18:37:47.190458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:28.674 [2024-11-20 18:37:47.190465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:28.674 [2024-11-20 18:37:47.190472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:28.674 [2024-11-20 18:37:47.190480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:28.674 [2024-11-20 18:37:47.190487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:28.674 [2024-11-20 18:37:47.190496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:28.674 [2024-11-20 18:37:47.190504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:28.674 [2024-11-20 18:37:47.190517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:28.674 [2024-11-20 18:37:47.190524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:28.674 [2024-11-20 18:37:47.190531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:28.674 [2024-11-20 18:37:47.190537] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:28.674 [2024-11-20 18:37:47.190546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:28.674 [2024-11-20 18:37:47.190554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:28.674 [2024-11-20 18:37:47.190561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:28.674 [2024-11-20 18:37:47.190567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:28.674 [2024-11-20 18:37:47.190574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:28.674 [2024-11-20 18:37:47.190580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:28.674 [2024-11-20 18:37:47.190587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:28.674 [2024-11-20 18:37:47.190594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:28.674 [2024-11-20 18:37:47.190601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:28.674 [2024-11-20 18:37:47.190607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:28.674 [2024-11-20 18:37:47.190613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:28.674 [2024-11-20 18:37:47.190620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:28.674 [2024-11-20 18:37:47.190627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:28.674 [2024-11-20 18:37:47.190633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:28.674 [2024-11-20 18:37:47.190640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:28.674 [2024-11-20 18:37:47.190646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:28.674 [2024-11-20 18:37:47.190653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:28.674 [2024-11-20 18:37:47.190661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:28.674 [2024-11-20 18:37:47.190667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:28.674 [2024-11-20 18:37:47.190674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:28.674 [2024-11-20 18:37:47.190680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:28.674 [2024-11-20 18:37:47.190687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:28.674 [2024-11-20 18:37:47.190693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:28.674 [2024-11-20 18:37:47.190700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:28.674 [2024-11-20 18:37:47.190706] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:28.674 [2024-11-20 18:37:47.190714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:28.674 [2024-11-20 18:37:47.190725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:28.674 [2024-11-20 18:37:47.190734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:28.674 [2024-11-20 18:37:47.190741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:28.674 [2024-11-20 18:37:47.190748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:28.674 [2024-11-20 18:37:47.190755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:28.674 
[2024-11-20 18:37:47.190762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:28.674 [2024-11-20 18:37:47.190771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:28.674 [2024-11-20 18:37:47.190778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:28.674 [2024-11-20 18:37:47.190787] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:28.674 [2024-11-20 18:37:47.190796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:28.674 [2024-11-20 18:37:47.190804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:28.674 [2024-11-20 18:37:47.190812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:28.674 [2024-11-20 18:37:47.190820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:28.674 [2024-11-20 18:37:47.190828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:28.674 [2024-11-20 18:37:47.190835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:28.674 [2024-11-20 18:37:47.190842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:28.674 [2024-11-20 18:37:47.190849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:28.674 [2024-11-20 18:37:47.190856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:28.674 [2024-11-20 18:37:47.190862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:28.674 [2024-11-20 18:37:47.190870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:28.674 [2024-11-20 18:37:47.190876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:28.674 [2024-11-20 18:37:47.190885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:28.674 [2024-11-20 18:37:47.190893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:28.674 [2024-11-20 18:37:47.190899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:28.674 [2024-11-20 18:37:47.190906] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:28.674 [2024-11-20 18:37:47.190917] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:28.674 [2024-11-20 18:37:47.190926] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:28.674 [2024-11-20 18:37:47.190935] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:28.675 [2024-11-20 18:37:47.190944] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:28.675 [2024-11-20 18:37:47.190952] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:28.675 [2024-11-20 18:37:47.190959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.675 [2024-11-20 18:37:47.190967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:28.675 [2024-11-20 18:37:47.190975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.646 ms 00:28:28.675 [2024-11-20 18:37:47.190983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.675 [2024-11-20 18:37:47.223004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.675 [2024-11-20 18:37:47.223206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:28.675 [2024-11-20 18:37:47.223226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.976 ms 00:28:28.675 [2024-11-20 18:37:47.223235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.675 [2024-11-20 18:37:47.223331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.675 [2024-11-20 18:37:47.223340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:28.675 [2024-11-20 18:37:47.223349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:28:28.675 [2024-11-20 18:37:47.223357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.675 [2024-11-20 18:37:47.269792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.675 [2024-11-20 18:37:47.269844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:28.675 [2024-11-20 18:37:47.269858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.372 ms 00:28:28.675 [2024-11-20 18:37:47.269867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.675 [2024-11-20 18:37:47.269916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.675 [2024-11-20 18:37:47.269927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:28.675 [2024-11-20 18:37:47.269936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:28.675 [2024-11-20 18:37:47.269948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.675 [2024-11-20 18:37:47.270537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.675 [2024-11-20 18:37:47.270560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:28.675 [2024-11-20 18:37:47.270571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.512 ms 00:28:28.675 [2024-11-20 18:37:47.270582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.675 [2024-11-20 18:37:47.270737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.675 [2024-11-20 18:37:47.270758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:28.675 [2024-11-20 18:37:47.270768] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:28:28.675 [2024-11-20 18:37:47.270783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.675 [2024-11-20 18:37:47.286479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.675 [2024-11-20 18:37:47.286522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:28.675 [2024-11-20 18:37:47.286536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.675 ms 00:28:28.675 [2024-11-20 18:37:47.286544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.946 [2024-11-20 18:37:47.300770] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 3, empty chunks = 1 00:28:28.946 [2024-11-20 18:37:47.300965] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:28.947 [2024-11-20 18:37:47.300985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.947 [2024-11-20 18:37:47.300996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:28.947 [2024-11-20 18:37:47.301007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.334 ms 00:28:28.947 [2024-11-20 18:37:47.301014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.947 [2024-11-20 18:37:47.326946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.947 [2024-11-20 18:37:47.326999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:28.947 [2024-11-20 18:37:47.327012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.887 ms 00:28:28.947 [2024-11-20 18:37:47.327021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.947 [2024-11-20 18:37:47.339599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.947 [2024-11-20 18:37:47.339672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:28.947 [2024-11-20 18:37:47.339686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.525 ms 00:28:28.947 [2024-11-20 18:37:47.339693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.947 [2024-11-20 18:37:47.352174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.947 [2024-11-20 18:37:47.352216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:28.947 [2024-11-20 18:37:47.352227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.435 ms 00:28:28.947 [2024-11-20 18:37:47.352235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.947 [2024-11-20 18:37:47.352878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.947 [2024-11-20 18:37:47.352906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:28.947 [2024-11-20 18:37:47.352917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:28:28.947 [2024-11-20 18:37:47.352928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.947 [2024-11-20 18:37:47.417509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.947 [2024-11-20 18:37:47.417571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:28.947 [2024-11-20 18:37:47.417594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 64.562 ms 00:28:28.947 [2024-11-20 18:37:47.417604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.947 [2024-11-20 18:37:47.429009] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:28.947 [2024-11-20 18:37:47.431990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.947 [2024-11-20 18:37:47.432032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:28.947 [2024-11-20 18:37:47.432043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.330 ms 00:28:28.947 [2024-11-20 18:37:47.432052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.947 [2024-11-20 18:37:47.432159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.947 [2024-11-20 18:37:47.432173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:28.947 [2024-11-20 18:37:47.432183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:28:28.947 [2024-11-20 18:37:47.432197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.947 [2024-11-20 18:37:47.432976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.947 [2024-11-20 18:37:47.433014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:28.947 [2024-11-20 18:37:47.433026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.740 ms 00:28:28.947 [2024-11-20 18:37:47.433035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.947 [2024-11-20 18:37:47.433063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.947 [2024-11-20 18:37:47.433072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:28.947 [2024-11-20 18:37:47.433082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:28.947 [2024-11-20 18:37:47.433090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.947 [2024-11-20 18:37:47.433148] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:28.947 [2024-11-20 18:37:47.433163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.947 [2024-11-20 18:37:47.433172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:28.947 [2024-11-20 18:37:47.433182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:28:28.947 [2024-11-20 18:37:47.433191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.947 [2024-11-20 18:37:47.458535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.947 [2024-11-20 18:37:47.458582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:28.947 [2024-11-20 18:37:47.458596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.323 ms 00:28:28.947 [2024-11-20 18:37:47.458611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.947 [2024-11-20 18:37:47.458698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.947 [2024-11-20 18:37:47.458708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:28.947 [2024-11-20 18:37:47.458718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:28:28.947 [2024-11-20 18:37:47.458726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
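The startup sequence above is reported as a series of Action / name / duration / status quartets emitted by trace_step in mngt/ftl_mngt.c as each management step completes. A minimal sketch of that tracing pattern in C — hypothetical names and structure, not the actual SPDK ftl_mngt API — assuming each step is a callback timed with a monotonic clock:

#include <stdio.h>
#include <time.h>

typedef int (*step_fn)(void);   /* hypothetical step callback type */

/* Time one management step and print an Action/name/duration/status
 * quartet in the same spirit as trace_step in mngt/ftl_mngt.c. */
static int run_step(const char *name, step_fn fn)
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    int status = fn();
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
                (t1.tv_nsec - t0.tv_nsec) / 1e6;
    printf("[FTL][ftl0] Action\n");
    printf("[FTL][ftl0] name: %s\n", name);
    printf("[FTL][ftl0] duration: %.3f ms\n", ms);
    printf("[FTL][ftl0] status: %d\n", status);
    return status;
}

static int check_configuration(void) { return 0; }  /* stand-in step */

int main(void)
{
    return run_step("Check configuration", check_configuration);
}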
00:28:28.947 [2024-11-20 18:37:47.460336] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 299.843 ms, result 0 00:28:30.391  [2024-11-20T18:37:49.957Z] Copying: 1012/1048576 [kB] (1012 kBps) [2024-11-20T18:37:50.902Z] Copying: 2240/1048576 [kB] (1228 kBps) [2024-11-20T18:37:51.847Z] Copying: 5896/1048576 [kB] (3656 kBps) [2024-11-20T18:37:52.790Z] Copying: 31/1024 [MB] (25 MBps) [2024-11-20T18:37:53.728Z] Copying: 59/1024 [MB] (28 MBps) [2024-11-20T18:37:54.670Z] Copying: 93/1024 [MB] (33 MBps) [2024-11-20T18:37:56.056Z] Copying: 124/1024 [MB] (31 MBps) [2024-11-20T18:37:56.992Z] Copying: 147/1024 [MB] (22 MBps) [2024-11-20T18:37:57.931Z] Copying: 174/1024 [MB] (27 MBps) [2024-11-20T18:37:58.868Z] Copying: 210/1024 [MB] (36 MBps) [2024-11-20T18:37:59.802Z] Copying: 235/1024 [MB] (24 MBps) [2024-11-20T18:38:00.739Z] Copying: 261/1024 [MB] (26 MBps) [2024-11-20T18:38:01.678Z] Copying: 285/1024 [MB] (23 MBps) [2024-11-20T18:38:03.060Z] Copying: 318/1024 [MB] (33 MBps) [2024-11-20T18:38:04.004Z] Copying: 347/1024 [MB] (28 MBps) [2024-11-20T18:38:04.950Z] Copying: 372/1024 [MB] (25 MBps) [2024-11-20T18:38:05.893Z] Copying: 394/1024 [MB] (22 MBps) [2024-11-20T18:38:06.834Z] Copying: 415/1024 [MB] (20 MBps) [2024-11-20T18:38:07.774Z] Copying: 431/1024 [MB] (15 MBps) [2024-11-20T18:38:08.718Z] Copying: 455/1024 [MB] (24 MBps) [2024-11-20T18:38:09.671Z] Copying: 471/1024 [MB] (15 MBps) [2024-11-20T18:38:11.055Z] Copying: 489/1024 [MB] (18 MBps) [2024-11-20T18:38:11.998Z] Copying: 505/1024 [MB] (15 MBps) [2024-11-20T18:38:12.943Z] Copying: 524/1024 [MB] (19 MBps) [2024-11-20T18:38:13.887Z] Copying: 540/1024 [MB] (15 MBps) [2024-11-20T18:38:14.831Z] Copying: 566/1024 [MB] (26 MBps) [2024-11-20T18:38:15.776Z] Copying: 582/1024 [MB] (16 MBps) [2024-11-20T18:38:16.718Z] Copying: 598/1024 [MB] (15 MBps) [2024-11-20T18:38:17.661Z] Copying: 623/1024 [MB] (25 MBps) [2024-11-20T18:38:19.043Z] Copying: 639/1024 [MB] (15 MBps) [2024-11-20T18:38:19.985Z] Copying: 669/1024 [MB] (29 MBps) [2024-11-20T18:38:20.929Z] Copying: 694/1024 [MB] (24 MBps) [2024-11-20T18:38:21.866Z] Copying: 718/1024 [MB] (24 MBps) [2024-11-20T18:38:22.813Z] Copying: 752/1024 [MB] (33 MBps) [2024-11-20T18:38:23.755Z] Copying: 782/1024 [MB] (30 MBps) [2024-11-20T18:38:24.699Z] Copying: 808/1024 [MB] (25 MBps) [2024-11-20T18:38:25.734Z] Copying: 837/1024 [MB] (28 MBps) [2024-11-20T18:38:26.677Z] Copying: 863/1024 [MB] (26 MBps) [2024-11-20T18:38:28.063Z] Copying: 891/1024 [MB] (28 MBps) [2024-11-20T18:38:29.004Z] Copying: 922/1024 [MB] (30 MBps) [2024-11-20T18:38:29.945Z] Copying: 958/1024 [MB] (36 MBps) [2024-11-20T18:38:30.888Z] Copying: 978/1024 [MB] (20 MBps) [2024-11-20T18:38:30.888Z] Copying: 1017/1024 [MB] (38 MBps) [2024-11-20T18:38:31.149Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-11-20 18:38:31.137712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.520 [2024-11-20 18:38:31.137794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:12.520 [2024-11-20 18:38:31.137813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:12.520 [2024-11-20 18:38:31.137837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.520 [2024-11-20 18:38:31.137868] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:12.520 [2024-11-20 18:38:31.142159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:12.520 [2024-11-20 18:38:31.142217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:12.520 [2024-11-20 18:38:31.142232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.271 ms 00:29:12.520 [2024-11-20 18:38:31.142243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.520 [2024-11-20 18:38:31.142554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.520 [2024-11-20 18:38:31.142583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:12.520 [2024-11-20 18:38:31.142599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:29:12.520 [2024-11-20 18:38:31.142616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.784 [2024-11-20 18:38:31.161684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.784 [2024-11-20 18:38:31.161738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:12.784 [2024-11-20 18:38:31.161752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.043 ms 00:29:12.784 [2024-11-20 18:38:31.161761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.784 [2024-11-20 18:38:31.168048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.784 [2024-11-20 18:38:31.168111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:12.784 [2024-11-20 18:38:31.168124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.255 ms 00:29:12.784 [2024-11-20 18:38:31.168134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.784 [2024-11-20 18:38:31.195451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.784 [2024-11-20 18:38:31.195499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:12.784 [2024-11-20 18:38:31.195513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.260 ms 00:29:12.784 [2024-11-20 18:38:31.195521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.784 [2024-11-20 18:38:31.211124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.784 [2024-11-20 18:38:31.211172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:12.784 [2024-11-20 18:38:31.211185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.552 ms 00:29:12.784 [2024-11-20 18:38:31.211194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.784 [2024-11-20 18:38:31.216206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.784 [2024-11-20 18:38:31.216254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:12.784 [2024-11-20 18:38:31.216266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.956 ms 00:29:12.784 [2024-11-20 18:38:31.216275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.784 [2024-11-20 18:38:31.242125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.784 [2024-11-20 18:38:31.242171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:12.784 [2024-11-20 18:38:31.242183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.832 ms 00:29:12.784 [2024-11-20 18:38:31.242192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.784 [2024-11-20 18:38:31.267696] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.784 [2024-11-20 18:38:31.267742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:12.784 [2024-11-20 18:38:31.267766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.456 ms 00:29:12.784 [2024-11-20 18:38:31.267774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.784 [2024-11-20 18:38:31.292746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.784 [2024-11-20 18:38:31.292789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:12.784 [2024-11-20 18:38:31.292801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.924 ms 00:29:12.784 [2024-11-20 18:38:31.292809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.784 [2024-11-20 18:38:31.317738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.784 [2024-11-20 18:38:31.317782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:12.784 [2024-11-20 18:38:31.317795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.853 ms 00:29:12.784 [2024-11-20 18:38:31.317802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.784 [2024-11-20 18:38:31.317849] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:12.784 [2024-11-20 18:38:31.317865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:12.784 [2024-11-20 18:38:31.317877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:29:12.784 [2024-11-20 18:38:31.317886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:12.784 [2024-11-20 18:38:31.317894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:12.784 [2024-11-20 18:38:31.317902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:12.784 [2024-11-20 18:38:31.317911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:12.784 [2024-11-20 18:38:31.317919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:12.784 [2024-11-20 18:38:31.317928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:12.784 [2024-11-20 18:38:31.317936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:12.784 [2024-11-20 18:38:31.317944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:12.784 [2024-11-20 18:38:31.317953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:12.784 [2024-11-20 18:38:31.317962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:12.784 [2024-11-20 18:38:31.317971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:12.784 [2024-11-20 18:38:31.317979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:12.784 [2024-11-20 18:38:31.317986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 
wr_cnt: 0 state: free 00:29:12.784 [2024-11-20 18:38:31.317995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:12.784 [2024-11-20 18:38:31.318004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:12.784 [2024-11-20 18:38:31.318012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:12.784 [2024-11-20 18:38:31.318020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:12.784 [2024-11-20 18:38:31.318028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:12.784 [2024-11-20 18:38:31.318036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:12.784 [2024-11-20 18:38:31.318043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:12.784 [2024-11-20 18:38:31.318051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:12.784 [2024-11-20 18:38:31.318059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:12.784 [2024-11-20 18:38:31.318067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:12.784 [2024-11-20 18:38:31.318074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:12.784 [2024-11-20 18:38:31.318084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:12.784 [2024-11-20 18:38:31.318126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:12.784 [2024-11-20 18:38:31.318135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:12.784 [2024-11-20 18:38:31.318145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:12.784 [2024-11-20 18:38:31.318154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:12.784 [2024-11-20 18:38:31.318162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:12.784 [2024-11-20 18:38:31.318170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318450] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318653] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:12.785 [2024-11-20 18:38:31.318752] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:12.785 [2024-11-20 18:38:31.318760] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0be3887f-60af-4a3e-bbbe-2f44c2e518c4 00:29:12.785 [2024-11-20 18:38:31.318768] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:29:12.785 [2024-11-20 18:38:31.318775] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 263616 00:29:12.785 [2024-11-20 18:38:31.318785] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 261632 00:29:12.785 [2024-11-20 18:38:31.318794] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0076 00:29:12.785 [2024-11-20 18:38:31.318808] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:12.786 [2024-11-20 18:38:31.318816] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:12.786 [2024-11-20 18:38:31.318823] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:12.786 [2024-11-20 18:38:31.318837] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:12.786 [2024-11-20 18:38:31.318845] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:12.786 [2024-11-20 18:38:31.318854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.786 [2024-11-20 18:38:31.318863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:12.786 [2024-11-20 18:38:31.318871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.006 ms 00:29:12.786 [2024-11-20 18:38:31.318879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.786 [2024-11-20 18:38:31.332630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.786 [2024-11-20 18:38:31.332832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:12.786 [2024-11-20 18:38:31.332860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 13.731 ms 00:29:12.786 [2024-11-20 18:38:31.332869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.786 [2024-11-20 18:38:31.333309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.786 [2024-11-20 18:38:31.333325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:12.786 [2024-11-20 18:38:31.333337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:29:12.786 [2024-11-20 18:38:31.333345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.786 [2024-11-20 18:38:31.369716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.786 [2024-11-20 18:38:31.369770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:12.786 [2024-11-20 18:38:31.369781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.786 [2024-11-20 18:38:31.369790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.786 [2024-11-20 18:38:31.369849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.786 [2024-11-20 18:38:31.369858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:12.786 [2024-11-20 18:38:31.369867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.786 [2024-11-20 18:38:31.369876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.786 [2024-11-20 18:38:31.369964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.786 [2024-11-20 18:38:31.369978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:12.786 [2024-11-20 18:38:31.369991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.786 [2024-11-20 18:38:31.369999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.786 [2024-11-20 18:38:31.370016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:12.786 [2024-11-20 18:38:31.370025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:12.786 [2024-11-20 18:38:31.370033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:12.786 [2024-11-20 18:38:31.370042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.047 [2024-11-20 18:38:31.454120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.047 [2024-11-20 18:38:31.454180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:13.048 [2024-11-20 18:38:31.454193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.048 [2024-11-20 18:38:31.454202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.048 [2024-11-20 18:38:31.523434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.048 [2024-11-20 18:38:31.523714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:13.048 [2024-11-20 18:38:31.523734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.048 [2024-11-20 18:38:31.523745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.048 [2024-11-20 18:38:31.523802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.048 [2024-11-20 18:38:31.523813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 
00:29:13.048 [2024-11-20 18:38:31.523821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.048 [2024-11-20 18:38:31.523837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.048 [2024-11-20 18:38:31.523900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.048 [2024-11-20 18:38:31.523913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:13.048 [2024-11-20 18:38:31.523923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.048 [2024-11-20 18:38:31.523933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.048 [2024-11-20 18:38:31.524044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.048 [2024-11-20 18:38:31.524055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:13.048 [2024-11-20 18:38:31.524067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.048 [2024-11-20 18:38:31.524078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.048 [2024-11-20 18:38:31.524154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.048 [2024-11-20 18:38:31.524165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:13.048 [2024-11-20 18:38:31.524174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.048 [2024-11-20 18:38:31.524182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.048 [2024-11-20 18:38:31.524224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.048 [2024-11-20 18:38:31.524237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:13.048 [2024-11-20 18:38:31.524246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.048 [2024-11-20 18:38:31.524254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.048 [2024-11-20 18:38:31.524305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.048 [2024-11-20 18:38:31.524318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:13.048 [2024-11-20 18:38:31.524329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.048 [2024-11-20 18:38:31.524338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.048 [2024-11-20 18:38:31.524473] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 386.728 ms, result 0 00:29:13.989 00:29:13.989 00:29:13.989 18:38:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:15.905 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:29:15.905 18:38:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:16.167 [2024-11-20 18:38:34.594519] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
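The statistics dumped during the 'FTL shutdown' above report total writes 263616 against user writes 261632, and the reported WAF of 1.0076 is simply their ratio (write amplification factor: media writes per user write; the extra 1984 writes are metadata and relocation traffic). A quick check of that arithmetic:

#include <stdio.h>

int main(void)
{
    /* figures taken from the ftl_dev_dump_stats output above */
    const double total_writes = 263616.0;  /* user + metadata/relocation writes */
    const double user_writes  = 261632.0;
    printf("WAF: %.4f\n", total_writes / user_writes);  /* prints 1.0076 */
    return 0;
}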
00:29:16.167 [2024-11-20 18:38:34.594660] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82315 ] 00:29:16.167 [2024-11-20 18:38:34.758903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.429 [2024-11-20 18:38:34.879542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:16.691 [2024-11-20 18:38:35.170850] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:16.691 [2024-11-20 18:38:35.170935] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:16.954 [2024-11-20 18:38:35.334601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.954 [2024-11-20 18:38:35.334662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:16.954 [2024-11-20 18:38:35.334682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:16.954 [2024-11-20 18:38:35.334692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.954 [2024-11-20 18:38:35.334750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.954 [2024-11-20 18:38:35.334761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:16.954 [2024-11-20 18:38:35.334773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:29:16.954 [2024-11-20 18:38:35.334781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.954 [2024-11-20 18:38:35.334802] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:16.954 [2024-11-20 18:38:35.335600] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:16.954 [2024-11-20 18:38:35.335632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.954 [2024-11-20 18:38:35.335642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:16.954 [2024-11-20 18:38:35.335652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.835 ms 00:29:16.954 [2024-11-20 18:38:35.335661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.954 [2024-11-20 18:38:35.337430] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:16.954 [2024-11-20 18:38:35.351657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.954 [2024-11-20 18:38:35.351708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:16.954 [2024-11-20 18:38:35.351722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.228 ms 00:29:16.954 [2024-11-20 18:38:35.351731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.954 [2024-11-20 18:38:35.351814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.954 [2024-11-20 18:38:35.351824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:16.954 [2024-11-20 18:38:35.351833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:29:16.954 [2024-11-20 18:38:35.351842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.954 [2024-11-20 18:38:35.360262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:16.954 [2024-11-20 18:38:35.360305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:16.954 [2024-11-20 18:38:35.360317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.339 ms 00:29:16.954 [2024-11-20 18:38:35.360326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.954 [2024-11-20 18:38:35.360411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.954 [2024-11-20 18:38:35.360421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:16.954 [2024-11-20 18:38:35.360430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:29:16.954 [2024-11-20 18:38:35.360438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.954 [2024-11-20 18:38:35.360483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.954 [2024-11-20 18:38:35.360495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:16.954 [2024-11-20 18:38:35.360505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:16.954 [2024-11-20 18:38:35.360513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.954 [2024-11-20 18:38:35.360537] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:16.954 [2024-11-20 18:38:35.364552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.954 [2024-11-20 18:38:35.364594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:16.954 [2024-11-20 18:38:35.364606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.020 ms 00:29:16.954 [2024-11-20 18:38:35.364617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.954 [2024-11-20 18:38:35.364654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.954 [2024-11-20 18:38:35.364662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:16.954 [2024-11-20 18:38:35.364672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:16.954 [2024-11-20 18:38:35.364680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.954 [2024-11-20 18:38:35.364732] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:16.954 [2024-11-20 18:38:35.364755] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:16.954 [2024-11-20 18:38:35.364793] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:16.954 [2024-11-20 18:38:35.364815] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:16.954 [2024-11-20 18:38:35.364921] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:16.954 [2024-11-20 18:38:35.364935] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:16.954 [2024-11-20 18:38:35.364947] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:16.954 [2024-11-20 18:38:35.364959] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:16.954 [2024-11-20 18:38:35.364969] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:16.954 [2024-11-20 18:38:35.364979] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:16.954 [2024-11-20 18:38:35.364987] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:16.954 [2024-11-20 18:38:35.364996] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:16.954 [2024-11-20 18:38:35.365004] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:16.954 [2024-11-20 18:38:35.365015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.954 [2024-11-20 18:38:35.365024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:16.954 [2024-11-20 18:38:35.365033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:29:16.954 [2024-11-20 18:38:35.365040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.954 [2024-11-20 18:38:35.365147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.954 [2024-11-20 18:38:35.365158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:16.954 [2024-11-20 18:38:35.365167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:29:16.954 [2024-11-20 18:38:35.365174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.954 [2024-11-20 18:38:35.365281] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:16.954 [2024-11-20 18:38:35.365297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:16.954 [2024-11-20 18:38:35.365308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:16.954 [2024-11-20 18:38:35.365317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:16.954 [2024-11-20 18:38:35.365325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:16.954 [2024-11-20 18:38:35.365333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:16.954 [2024-11-20 18:38:35.365341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:16.954 [2024-11-20 18:38:35.365351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:16.954 [2024-11-20 18:38:35.365359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:16.954 [2024-11-20 18:38:35.365366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:16.954 [2024-11-20 18:38:35.365372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:16.954 [2024-11-20 18:38:35.365379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:16.954 [2024-11-20 18:38:35.365386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:16.954 [2024-11-20 18:38:35.365396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:16.954 [2024-11-20 18:38:35.365403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:16.954 [2024-11-20 18:38:35.365415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:16.954 [2024-11-20 18:38:35.365422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:16.954 [2024-11-20 18:38:35.365429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:16.954 [2024-11-20 18:38:35.365436] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:16.954 [2024-11-20 18:38:35.365444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:16.954 [2024-11-20 18:38:35.365451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:16.954 [2024-11-20 18:38:35.365458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:16.954 [2024-11-20 18:38:35.365464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:16.954 [2024-11-20 18:38:35.365471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:16.954 [2024-11-20 18:38:35.365480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:16.954 [2024-11-20 18:38:35.365488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:16.954 [2024-11-20 18:38:35.365494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:16.954 [2024-11-20 18:38:35.365500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:16.954 [2024-11-20 18:38:35.365507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:16.954 [2024-11-20 18:38:35.365513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:16.954 [2024-11-20 18:38:35.365519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:16.954 [2024-11-20 18:38:35.365525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:16.954 [2024-11-20 18:38:35.365532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:16.954 [2024-11-20 18:38:35.365539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:16.954 [2024-11-20 18:38:35.365545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:16.954 [2024-11-20 18:38:35.365551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:16.954 [2024-11-20 18:38:35.365557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:16.955 [2024-11-20 18:38:35.365563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:16.955 [2024-11-20 18:38:35.365570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:16.955 [2024-11-20 18:38:35.365576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:16.955 [2024-11-20 18:38:35.365582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:16.955 [2024-11-20 18:38:35.365592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:16.955 [2024-11-20 18:38:35.365599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:16.955 [2024-11-20 18:38:35.365607] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:16.955 [2024-11-20 18:38:35.365615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:16.955 [2024-11-20 18:38:35.365625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:16.955 [2024-11-20 18:38:35.365635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:16.955 [2024-11-20 18:38:35.365643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:16.955 [2024-11-20 18:38:35.365651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:16.955 [2024-11-20 18:38:35.365657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:16.955 
[2024-11-20 18:38:35.365664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:16.955 [2024-11-20 18:38:35.365671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:16.955 [2024-11-20 18:38:35.365677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:16.955 [2024-11-20 18:38:35.365687] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:16.955 [2024-11-20 18:38:35.365698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:16.955 [2024-11-20 18:38:35.365706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:16.955 [2024-11-20 18:38:35.365713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:16.955 [2024-11-20 18:38:35.365720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:16.955 [2024-11-20 18:38:35.365727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:16.955 [2024-11-20 18:38:35.365735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:16.955 [2024-11-20 18:38:35.365743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:16.955 [2024-11-20 18:38:35.365749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:16.955 [2024-11-20 18:38:35.365756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:16.955 [2024-11-20 18:38:35.365763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:16.955 [2024-11-20 18:38:35.365771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:16.955 [2024-11-20 18:38:35.365779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:16.955 [2024-11-20 18:38:35.365786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:16.955 [2024-11-20 18:38:35.365794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:16.955 [2024-11-20 18:38:35.365801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:16.955 [2024-11-20 18:38:35.365807] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:16.955 [2024-11-20 18:38:35.365821] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:16.955 [2024-11-20 18:38:35.365829] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:16.955 [2024-11-20 18:38:35.365836] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:16.955 [2024-11-20 18:38:35.365843] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:16.955 [2024-11-20 18:38:35.365850] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:16.955 [2024-11-20 18:38:35.365859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.955 [2024-11-20 18:38:35.365868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:16.955 [2024-11-20 18:38:35.365877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.648 ms 00:29:16.955 [2024-11-20 18:38:35.365885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.955 [2024-11-20 18:38:35.398246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.955 [2024-11-20 18:38:35.398294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:16.955 [2024-11-20 18:38:35.398306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.314 ms 00:29:16.955 [2024-11-20 18:38:35.398315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.955 [2024-11-20 18:38:35.398412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.955 [2024-11-20 18:38:35.398420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:16.955 [2024-11-20 18:38:35.398429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:29:16.955 [2024-11-20 18:38:35.398437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.955 [2024-11-20 18:38:35.442882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.955 [2024-11-20 18:38:35.442939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:16.955 [2024-11-20 18:38:35.442953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.386 ms 00:29:16.955 [2024-11-20 18:38:35.442963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.955 [2024-11-20 18:38:35.443013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.955 [2024-11-20 18:38:35.443024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:16.955 [2024-11-20 18:38:35.443034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:16.955 [2024-11-20 18:38:35.443046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.955 [2024-11-20 18:38:35.443767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.955 [2024-11-20 18:38:35.443811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:16.955 [2024-11-20 18:38:35.443823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.616 ms 00:29:16.955 [2024-11-20 18:38:35.443832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.955 [2024-11-20 18:38:35.443988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.955 [2024-11-20 18:38:35.444000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:16.955 [2024-11-20 18:38:35.444009] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:29:16.955 [2024-11-20 18:38:35.444024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.955 [2024-11-20 18:38:35.459886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.955 [2024-11-20 18:38:35.459932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:16.955 [2024-11-20 18:38:35.459946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.841 ms 00:29:16.955 [2024-11-20 18:38:35.459955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.955 [2024-11-20 18:38:35.474496] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:16.955 [2024-11-20 18:38:35.474692] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:16.955 [2024-11-20 18:38:35.474712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.955 [2024-11-20 18:38:35.474722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:16.955 [2024-11-20 18:38:35.474732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.648 ms 00:29:16.955 [2024-11-20 18:38:35.474740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.955 [2024-11-20 18:38:35.500717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.955 [2024-11-20 18:38:35.500777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:16.955 [2024-11-20 18:38:35.500789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.816 ms 00:29:16.955 [2024-11-20 18:38:35.500798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.955 [2024-11-20 18:38:35.513820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.955 [2024-11-20 18:38:35.513869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:16.955 [2024-11-20 18:38:35.513881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.966 ms 00:29:16.955 [2024-11-20 18:38:35.513889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.955 [2024-11-20 18:38:35.526870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.955 [2024-11-20 18:38:35.526915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:16.955 [2024-11-20 18:38:35.526928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.932 ms 00:29:16.955 [2024-11-20 18:38:35.526936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.955 [2024-11-20 18:38:35.527665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.955 [2024-11-20 18:38:35.527700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:16.955 [2024-11-20 18:38:35.527712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.617 ms 00:29:16.955 [2024-11-20 18:38:35.527724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.217 [2024-11-20 18:38:35.594483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.217 [2024-11-20 18:38:35.594551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:17.217 [2024-11-20 18:38:35.594574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 66.737 ms 00:29:17.217 [2024-11-20 18:38:35.594584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.217 [2024-11-20 18:38:35.605835] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:17.217 [2024-11-20 18:38:35.609057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.217 [2024-11-20 18:38:35.609124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:17.217 [2024-11-20 18:38:35.609138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.409 ms 00:29:17.217 [2024-11-20 18:38:35.609147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.217 [2024-11-20 18:38:35.609241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.217 [2024-11-20 18:38:35.609255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:17.217 [2024-11-20 18:38:35.609268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:29:17.217 [2024-11-20 18:38:35.609281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.217 [2024-11-20 18:38:35.610201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.217 [2024-11-20 18:38:35.610247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:17.217 [2024-11-20 18:38:35.610260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.881 ms 00:29:17.217 [2024-11-20 18:38:35.610269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.217 [2024-11-20 18:38:35.610300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.217 [2024-11-20 18:38:35.610309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:17.217 [2024-11-20 18:38:35.610320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:17.217 [2024-11-20 18:38:35.610328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.217 [2024-11-20 18:38:35.610372] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:17.217 [2024-11-20 18:38:35.610388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.217 [2024-11-20 18:38:35.610398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:17.217 [2024-11-20 18:38:35.610408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:29:17.217 [2024-11-20 18:38:35.610418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.217 [2024-11-20 18:38:35.636438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.217 [2024-11-20 18:38:35.636489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:17.217 [2024-11-20 18:38:35.636503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.000 ms 00:29:17.217 [2024-11-20 18:38:35.636518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.217 [2024-11-20 18:38:35.636609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.217 [2024-11-20 18:38:35.636620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:17.217 [2024-11-20 18:38:35.636629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:29:17.217 [2024-11-20 18:38:35.636637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
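(Note: the layout numbers in the startup trace above are internally consistent. "L2P entries: 20971520" at "L2P address size: 4" bytes gives an 80 MiB mapping table, which matches both "Region l2p ... blocks: 80.00 MiB" in the MiB dump and "Region type:0x2 ... blk_sz:0x5000" in the superblock dump — provided one assumes SPDK FTL's usual 4 KiB block size, which is not itself printed in this log. A minimal shell cross-check under that assumption:

    # Consistency check for the layout dump above. The 4096-byte FTL
    # block size is an assumption (SPDK's default), not a value from
    # this log; the other numbers are taken verbatim from the trace.
    l2p_entries=20971520   # "L2P entries" from ftl_layout_setup
    l2p_addr_size=4        # "L2P address size" in bytes
    sb_blk_sz=0x5000       # blk_sz of "Region type:0x2" in the SB dump
    ftl_block=4096         # assumed FTL block size in bytes

    echo "from entries: $(( l2p_entries * l2p_addr_size / 1024 / 1024 )) MiB"  # -> 80 MiB
    echo "from SB dump: $(( sb_blk_sz * ftl_block / 1024 / 1024 )) MiB"        # -> 80 MiB

Both expressions yield 80 MiB (0x5000 = 20480 blocks), so the human-readable layout dump and the raw superblock regions describe the same on-disk geometry.)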
00:29:17.217 [2024-11-20 18:38:35.637897] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 302.800 ms, result 0 00:29:18.602  [2024-11-20T18:38:38.170Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-20T18:38:39.112Z] Copying: 21/1024 [MB] (10 MBps) [2024-11-20T18:38:40.055Z] Copying: 32/1024 [MB] (10 MBps) [2024-11-20T18:38:40.997Z] Copying: 46/1024 [MB] (13 MBps) [2024-11-20T18:38:41.941Z] Copying: 60/1024 [MB] (14 MBps) [2024-11-20T18:38:42.883Z] Copying: 78/1024 [MB] (18 MBps) [2024-11-20T18:38:43.826Z] Copying: 89/1024 [MB] (10 MBps) [2024-11-20T18:38:45.211Z] Copying: 102/1024 [MB] (13 MBps) [2024-11-20T18:38:46.152Z] Copying: 114/1024 [MB] (11 MBps) [2024-11-20T18:38:47.096Z] Copying: 129/1024 [MB] (14 MBps) [2024-11-20T18:38:48.038Z] Copying: 147/1024 [MB] (18 MBps) [2024-11-20T18:38:48.978Z] Copying: 158/1024 [MB] (11 MBps) [2024-11-20T18:38:49.917Z] Copying: 169/1024 [MB] (10 MBps) [2024-11-20T18:38:50.857Z] Copying: 183/1024 [MB] (14 MBps) [2024-11-20T18:38:52.239Z] Copying: 194/1024 [MB] (10 MBps) [2024-11-20T18:38:53.180Z] Copying: 205/1024 [MB] (11 MBps) [2024-11-20T18:38:54.119Z] Copying: 226/1024 [MB] (20 MBps) [2024-11-20T18:38:55.055Z] Copying: 242/1024 [MB] (15 MBps) [2024-11-20T18:38:55.991Z] Copying: 254/1024 [MB] (12 MBps) [2024-11-20T18:38:56.925Z] Copying: 266/1024 [MB] (12 MBps) [2024-11-20T18:38:57.860Z] Copying: 284/1024 [MB] (18 MBps) [2024-11-20T18:38:58.894Z] Copying: 313/1024 [MB] (28 MBps) [2024-11-20T18:38:59.832Z] Copying: 323/1024 [MB] (10 MBps) [2024-11-20T18:39:01.214Z] Copying: 336/1024 [MB] (12 MBps) [2024-11-20T18:39:02.157Z] Copying: 363/1024 [MB] (26 MBps) [2024-11-20T18:39:03.101Z] Copying: 374/1024 [MB] (11 MBps) [2024-11-20T18:39:04.043Z] Copying: 386/1024 [MB] (11 MBps) [2024-11-20T18:39:04.986Z] Copying: 397/1024 [MB] (10 MBps) [2024-11-20T18:39:05.929Z] Copying: 408/1024 [MB] (10 MBps) [2024-11-20T18:39:06.872Z] Copying: 420/1024 [MB] (12 MBps) [2024-11-20T18:39:08.253Z] Copying: 431/1024 [MB] (11 MBps) [2024-11-20T18:39:08.826Z] Copying: 454/1024 [MB] (23 MBps) [2024-11-20T18:39:10.209Z] Copying: 469/1024 [MB] (14 MBps) [2024-11-20T18:39:11.149Z] Copying: 482/1024 [MB] (13 MBps) [2024-11-20T18:39:12.091Z] Copying: 506/1024 [MB] (24 MBps) [2024-11-20T18:39:13.034Z] Copying: 527/1024 [MB] (20 MBps) [2024-11-20T18:39:13.974Z] Copying: 546/1024 [MB] (18 MBps) [2024-11-20T18:39:14.915Z] Copying: 572/1024 [MB] (25 MBps) [2024-11-20T18:39:15.859Z] Copying: 596/1024 [MB] (23 MBps) [2024-11-20T18:39:17.248Z] Copying: 617/1024 [MB] (21 MBps) [2024-11-20T18:39:17.821Z] Copying: 635/1024 [MB] (17 MBps) [2024-11-20T18:39:19.208Z] Copying: 653/1024 [MB] (18 MBps) [2024-11-20T18:39:20.148Z] Copying: 663/1024 [MB] (10 MBps) [2024-11-20T18:39:21.092Z] Copying: 679/1024 [MB] (15 MBps) [2024-11-20T18:39:22.034Z] Copying: 689/1024 [MB] (10 MBps) [2024-11-20T18:39:22.980Z] Copying: 706/1024 [MB] (16 MBps) [2024-11-20T18:39:23.925Z] Copying: 719/1024 [MB] (12 MBps) [2024-11-20T18:39:24.872Z] Copying: 730/1024 [MB] (11 MBps) [2024-11-20T18:39:26.261Z] Copying: 743/1024 [MB] (13 MBps) [2024-11-20T18:39:26.834Z] Copying: 757/1024 [MB] (13 MBps) [2024-11-20T18:39:28.224Z] Copying: 768/1024 [MB] (10 MBps) [2024-11-20T18:39:29.169Z] Copying: 786/1024 [MB] (18 MBps) [2024-11-20T18:39:30.108Z] Copying: 800/1024 [MB] (14 MBps) [2024-11-20T18:39:31.051Z] Copying: 821/1024 [MB] (20 MBps) [2024-11-20T18:39:31.996Z] Copying: 840/1024 [MB] (18 MBps) [2024-11-20T18:39:32.939Z] Copying: 851/1024 [MB] (11 MBps) 
[2024-11-20T18:39:33.952Z] Copying: 871/1024 [MB] (19 MBps) [2024-11-20T18:39:34.943Z] Copying: 892/1024 [MB] (21 MBps) [2024-11-20T18:39:35.888Z] Copying: 912/1024 [MB] (19 MBps) [2024-11-20T18:39:36.833Z] Copying: 932/1024 [MB] (20 MBps) [2024-11-20T18:39:38.219Z] Copying: 953/1024 [MB] (21 MBps) [2024-11-20T18:39:39.162Z] Copying: 968/1024 [MB] (15 MBps) [2024-11-20T18:39:40.108Z] Copying: 989/1024 [MB] (20 MBps) [2024-11-20T18:39:41.054Z] Copying: 1006/1024 [MB] (16 MBps) [2024-11-20T18:39:41.316Z] Copying: 1018/1024 [MB] (11 MBps) [2024-11-20T18:39:41.579Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-20 18:39:41.361312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.950 [2024-11-20 18:39:41.362484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:22.950 [2024-11-20 18:39:41.362842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:22.950 [2024-11-20 18:39:41.362901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.950 [2024-11-20 18:39:41.363315] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:22.950 [2024-11-20 18:39:41.367159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.950 [2024-11-20 18:39:41.367422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:22.950 [2024-11-20 18:39:41.367533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.604 ms 00:30:22.950 [2024-11-20 18:39:41.367630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.950 [2024-11-20 18:39:41.368054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.950 [2024-11-20 18:39:41.368272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:22.950 [2024-11-20 18:39:41.368380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:30:22.950 [2024-11-20 18:39:41.368468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.950 [2024-11-20 18:39:41.373289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.950 [2024-11-20 18:39:41.373521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:22.950 [2024-11-20 18:39:41.373803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.737 ms 00:30:22.950 [2024-11-20 18:39:41.373876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.950 [2024-11-20 18:39:41.380403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.950 [2024-11-20 18:39:41.380643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:22.950 [2024-11-20 18:39:41.380767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.369 ms 00:30:22.950 [2024-11-20 18:39:41.380836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.950 [2024-11-20 18:39:41.408702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.950 [2024-11-20 18:39:41.408834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:22.950 [2024-11-20 18:39:41.408900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.678 ms 00:30:22.950 [2024-11-20 18:39:41.409080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.950 [2024-11-20 18:39:41.427026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:22.950 [2024-11-20 18:39:41.427294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:22.950 [2024-11-20 18:39:41.427500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.585 ms 00:30:22.950 [2024-11-20 18:39:41.427567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.950 [2024-11-20 18:39:41.432342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.950 [2024-11-20 18:39:41.432564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:22.950 [2024-11-20 18:39:41.432718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.664 ms 00:30:22.950 [2024-11-20 18:39:41.432791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.950 [2024-11-20 18:39:41.459041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.950 [2024-11-20 18:39:41.459305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:22.950 [2024-11-20 18:39:41.459441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.100 ms 00:30:22.950 [2024-11-20 18:39:41.459507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.950 [2024-11-20 18:39:41.485313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.950 [2024-11-20 18:39:41.485563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:22.950 [2024-11-20 18:39:41.485853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.626 ms 00:30:22.950 [2024-11-20 18:39:41.485935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.950 [2024-11-20 18:39:41.510963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.950 [2024-11-20 18:39:41.511212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:22.950 [2024-11-20 18:39:41.511531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.931 ms 00:30:22.950 [2024-11-20 18:39:41.511663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.950 [2024-11-20 18:39:41.535863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.950 [2024-11-20 18:39:41.536145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:22.950 [2024-11-20 18:39:41.536355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.068 ms 00:30:22.950 [2024-11-20 18:39:41.536428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.950 [2024-11-20 18:39:41.536515] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:22.950 [2024-11-20 18:39:41.536756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:22.950 [2024-11-20 18:39:41.536846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:30:22.950 [2024-11-20 18:39:41.537206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:22.950 [2024-11-20 18:39:41.537280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:22.950 [2024-11-20 18:39:41.537431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:22.950 [2024-11-20 18:39:41.537547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:22.950 [2024-11-20 18:39:41.537679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:22.950 [2024-11-20 18:39:41.537791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:22.950 [2024-11-20 18:39:41.537863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:22.950 [2024-11-20 18:39:41.537897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:22.950 [2024-11-20 18:39:41.537962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:22.950 [2024-11-20 18:39:41.538024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:22.950 [2024-11-20 18:39:41.538198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:22.950 [2024-11-20 18:39:41.538212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:22.950 [2024-11-20 18:39:41.538222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:22.950 [2024-11-20 18:39:41.538230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538347] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538538] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 
18:39:41.538727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:22.951 [2024-11-20 18:39:41.538909] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:22.951 [2024-11-20 18:39:41.538922] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0be3887f-60af-4a3e-bbbe-2f44c2e518c4 00:30:22.951 [2024-11-20 18:39:41.538931] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:30:22.951 [2024-11-20 18:39:41.538940] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:22.951 [2024-11-20 18:39:41.538948] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:22.951 [2024-11-20 18:39:41.538957] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 
inf 00:30:22.951 [2024-11-20 18:39:41.538965] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:22.952 [2024-11-20 18:39:41.538974] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:22.952 [2024-11-20 18:39:41.538990] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:22.952 [2024-11-20 18:39:41.539000] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:22.952 [2024-11-20 18:39:41.539007] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:22.952 [2024-11-20 18:39:41.539016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.952 [2024-11-20 18:39:41.539024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:22.952 [2024-11-20 18:39:41.539034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.502 ms 00:30:22.952 [2024-11-20 18:39:41.539042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.952 [2024-11-20 18:39:41.556324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.952 [2024-11-20 18:39:41.556361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:22.952 [2024-11-20 18:39:41.556373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.253 ms 00:30:22.952 [2024-11-20 18:39:41.556381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.952 [2024-11-20 18:39:41.556816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.952 [2024-11-20 18:39:41.556826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:22.952 [2024-11-20 18:39:41.556842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:30:22.952 [2024-11-20 18:39:41.556850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.214 [2024-11-20 18:39:41.595985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:23.214 [2024-11-20 18:39:41.596035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:23.214 [2024-11-20 18:39:41.596048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:23.214 [2024-11-20 18:39:41.596058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.214 [2024-11-20 18:39:41.596153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:23.214 [2024-11-20 18:39:41.596164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:23.214 [2024-11-20 18:39:41.596180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:23.214 [2024-11-20 18:39:41.596189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.214 [2024-11-20 18:39:41.596273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:23.214 [2024-11-20 18:39:41.596286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:23.214 [2024-11-20 18:39:41.596295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:23.214 [2024-11-20 18:39:41.596304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.214 [2024-11-20 18:39:41.596320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:23.214 [2024-11-20 18:39:41.596330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:23.214 [2024-11-20 18:39:41.596338] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:23.214 [2024-11-20 18:39:41.596351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.214 [2024-11-20 18:39:41.686923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:23.214 [2024-11-20 18:39:41.687227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:23.214 [2024-11-20 18:39:41.687252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:23.214 [2024-11-20 18:39:41.687262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.214 [2024-11-20 18:39:41.764313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:23.214 [2024-11-20 18:39:41.764375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:23.214 [2024-11-20 18:39:41.764391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:23.214 [2024-11-20 18:39:41.764408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.214 [2024-11-20 18:39:41.764490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:23.214 [2024-11-20 18:39:41.764501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:23.214 [2024-11-20 18:39:41.764512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:23.214 [2024-11-20 18:39:41.764521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.214 [2024-11-20 18:39:41.764589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:23.214 [2024-11-20 18:39:41.764603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:23.214 [2024-11-20 18:39:41.764612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:23.214 [2024-11-20 18:39:41.764621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.214 [2024-11-20 18:39:41.764735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:23.214 [2024-11-20 18:39:41.764748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:23.214 [2024-11-20 18:39:41.764758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:23.214 [2024-11-20 18:39:41.764767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.214 [2024-11-20 18:39:41.764802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:23.214 [2024-11-20 18:39:41.764813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:23.214 [2024-11-20 18:39:41.764823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:23.214 [2024-11-20 18:39:41.764832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.214 [2024-11-20 18:39:41.764891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:23.214 [2024-11-20 18:39:41.764902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:23.214 [2024-11-20 18:39:41.764914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:23.214 [2024-11-20 18:39:41.764925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.214 [2024-11-20 18:39:41.764980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:23.214 [2024-11-20 18:39:41.764993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base 
bdev 00:30:23.214 [2024-11-20 18:39:41.765002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:23.214 [2024-11-20 18:39:41.765011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.214 [2024-11-20 18:39:41.765221] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 403.871 ms, result 0 00:30:24.159 00:30:24.159 00:30:24.159 18:39:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:30:26.708 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:30:26.708 18:39:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:30:26.708 18:39:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:30:26.708 18:39:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:26.708 18:39:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:26.708 18:39:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:30:26.708 18:39:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:26.708 18:39:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:30:26.708 Process with pid 80441 is not found 00:30:26.708 18:39:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 80441 00:30:26.708 18:39:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80441 ']' 00:30:26.708 18:39:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 80441 00:30:26.708 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80441) - No such process 00:30:26.708 18:39:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 80441 is not found' 00:30:26.708 18:39:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:30:26.708 Remove shared memory files 00:30:26.708 18:39:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:30:26.708 18:39:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:26.708 18:39:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:30:26.708 18:39:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:30:26.709 18:39:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:30:26.709 18:39:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:26.709 18:39:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:30:26.709 ************************************ 00:30:26.709 END TEST ftl_dirty_shutdown 00:30:26.709 ************************************ 00:30:26.709 00:30:26.709 real 4m7.419s 00:30:26.709 user 4m22.177s 00:30:26.709 sys 0m24.188s 00:30:26.709 18:39:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:26.709 18:39:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:26.709 18:39:45 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:30:26.709 18:39:45 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:26.709 18:39:45 ftl -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:30:26.709 18:39:45 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:26.969 ************************************ 00:30:26.969 START TEST ftl_upgrade_shutdown 00:30:26.969 ************************************ 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:30:26.969 * Looking for test storage... 00:30:26.969 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:26.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.969 --rc genhtml_branch_coverage=1 00:30:26.969 --rc genhtml_function_coverage=1 00:30:26.969 --rc genhtml_legend=1 00:30:26.969 --rc geninfo_all_blocks=1 00:30:26.969 --rc geninfo_unexecuted_blocks=1 00:30:26.969 00:30:26.969 ' 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:26.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.969 --rc genhtml_branch_coverage=1 00:30:26.969 --rc genhtml_function_coverage=1 00:30:26.969 --rc genhtml_legend=1 00:30:26.969 --rc geninfo_all_blocks=1 00:30:26.969 --rc geninfo_unexecuted_blocks=1 00:30:26.969 00:30:26.969 ' 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:26.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.969 --rc genhtml_branch_coverage=1 00:30:26.969 --rc genhtml_function_coverage=1 00:30:26.969 --rc genhtml_legend=1 00:30:26.969 --rc geninfo_all_blocks=1 00:30:26.969 --rc geninfo_unexecuted_blocks=1 00:30:26.969 00:30:26.969 ' 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:26.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.969 --rc genhtml_branch_coverage=1 00:30:26.969 --rc genhtml_function_coverage=1 00:30:26.969 --rc genhtml_legend=1 00:30:26.969 --rc geninfo_all_blocks=1 00:30:26.969 --rc geninfo_unexecuted_blocks=1 00:30:26.969 00:30:26.969 ' 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:30:26.969 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:30:26.970 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:30:26.970 18:39:45 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:30:26.970 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:30:26.970 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:30:26.970 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:30:26.970 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:30:26.970 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:30:26.970 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:30:26.970 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:30:26.970 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:26.970 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:26.970 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:26.970 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83091 00:30:26.970 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:26.970 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83091 00:30:26.970 18:39:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:30:26.970 18:39:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83091 ']' 00:30:26.970 18:39:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:26.970 18:39:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:26.970 18:39:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:26.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:26.970 18:39:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:26.970 18:39:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:27.230 [2024-11-20 18:39:45.609946] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
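The target bring-up traced above reduces to three steps: start spdk_tgt pinned to core 0, wait for its RPC socket, and attach the base PCIe controller. A minimal sketch using the binaries and addresses from this run; the polling loop is an assumed stand-in for the harness's waitforlisten helper, not a copy of it:

    spdk=/home/vagrant/spdk_repo/spdk

    # Launch a dedicated SPDK target pinned to core 0, as the harness does.
    "$spdk/build/bin/spdk_tgt" --cpumask='[0]' &

    # Stand-in for waitforlisten: poll until the default RPC socket answers.
    until "$spdk/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; do
        sleep 0.2
    done

    # Attach the base NVMe controller; its namespace appears as bdev "basen1".
    "$spdk/scripts/rpc.py" bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0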
00:30:27.230 [2024-11-20 18:39:45.610066] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83091 ] 00:30:27.230 [2024-11-20 18:39:45.764704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:27.491 [2024-11-20 18:39:45.871514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:28.064 18:39:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:28.064 18:39:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:28.064 18:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:28.064 18:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:30:28.064 18:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:30:28.064 18:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:28.064 18:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:30:28.064 18:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:28.064 18:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:30:28.064 18:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:28.064 18:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:30:28.064 18:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:28.064 18:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:30:28.064 18:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:28.064 18:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:30:28.064 18:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:28.064 18:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:30:28.064 18:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:30:28.064 18:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:30:28.064 18:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:28.064 18:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:30:28.064 18:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:30:28.064 18:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:30:28.324 18:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:30:28.324 18:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:30:28.324 18:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:30:28.324 18:39:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:30:28.324 18:39:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:28.324 18:39:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:30:28.324 18:39:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:30:28.324 18:39:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:30:28.586 18:39:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:28.586 { 00:30:28.586 "name": "basen1", 00:30:28.586 "aliases": [ 00:30:28.586 "4725f424-a2f6-4fc2-ae62-a0981bbe8655" 00:30:28.586 ], 00:30:28.586 "product_name": "NVMe disk", 00:30:28.586 "block_size": 4096, 00:30:28.586 "num_blocks": 1310720, 00:30:28.586 "uuid": "4725f424-a2f6-4fc2-ae62-a0981bbe8655", 00:30:28.586 "numa_id": -1, 00:30:28.586 "assigned_rate_limits": { 00:30:28.586 "rw_ios_per_sec": 0, 00:30:28.586 "rw_mbytes_per_sec": 0, 00:30:28.586 "r_mbytes_per_sec": 0, 00:30:28.586 "w_mbytes_per_sec": 0 00:30:28.586 }, 00:30:28.586 "claimed": true, 00:30:28.586 "claim_type": "read_many_write_one", 00:30:28.586 "zoned": false, 00:30:28.586 "supported_io_types": { 00:30:28.586 "read": true, 00:30:28.586 "write": true, 00:30:28.586 "unmap": true, 00:30:28.586 "flush": true, 00:30:28.586 "reset": true, 00:30:28.586 "nvme_admin": true, 00:30:28.586 "nvme_io": true, 00:30:28.586 "nvme_io_md": false, 00:30:28.586 "write_zeroes": true, 00:30:28.586 "zcopy": false, 00:30:28.586 "get_zone_info": false, 00:30:28.586 "zone_management": false, 00:30:28.586 "zone_append": false, 00:30:28.586 "compare": true, 00:30:28.586 "compare_and_write": false, 00:30:28.586 "abort": true, 00:30:28.586 "seek_hole": false, 00:30:28.586 "seek_data": false, 00:30:28.586 "copy": true, 00:30:28.586 "nvme_iov_md": false 00:30:28.586 }, 00:30:28.586 "driver_specific": { 00:30:28.586 "nvme": [ 00:30:28.586 { 00:30:28.586 "pci_address": "0000:00:11.0", 00:30:28.586 "trid": { 00:30:28.586 "trtype": "PCIe", 00:30:28.586 "traddr": "0000:00:11.0" 00:30:28.586 }, 00:30:28.586 "ctrlr_data": { 00:30:28.586 "cntlid": 0, 00:30:28.586 "vendor_id": "0x1b36", 00:30:28.586 "model_number": "QEMU NVMe Ctrl", 00:30:28.586 "serial_number": "12341", 00:30:28.586 "firmware_revision": "8.0.0", 00:30:28.586 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:28.586 "oacs": { 00:30:28.586 "security": 0, 00:30:28.586 "format": 1, 00:30:28.586 "firmware": 0, 00:30:28.586 "ns_manage": 1 00:30:28.586 }, 00:30:28.586 "multi_ctrlr": false, 00:30:28.586 "ana_reporting": false 00:30:28.586 }, 00:30:28.586 "vs": { 00:30:28.586 "nvme_version": "1.4" 00:30:28.586 }, 00:30:28.586 "ns_data": { 00:30:28.586 "id": 1, 00:30:28.586 "can_share": false 00:30:28.586 } 00:30:28.586 } 00:30:28.586 ], 00:30:28.586 "mp_policy": "active_passive" 00:30:28.586 } 00:30:28.586 } 00:30:28.586 ]' 00:30:28.586 18:39:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:28.586 18:39:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:30:28.586 18:39:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:28.586 18:39:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:30:28.586 18:39:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:30:28.586 18:39:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:30:28.586 18:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:30:28.586 18:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:30:28.586 18:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:30:28.586 18:39:47 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:28.586 18:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:28.847 18:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=da218ca9-3456-4d0b-9d3e-a4fe1c9f1d18 00:30:28.848 18:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:30:28.848 18:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u da218ca9-3456-4d0b-9d3e-a4fe1c9f1d18 00:30:29.108 18:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:30:29.369 18:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=2e760d3d-8d3e-477a-9ebe-245f4661448e 00:30:29.369 18:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 2e760d3d-8d3e-477a-9ebe-245f4661448e 00:30:29.630 18:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=406bce1d-4d97-4d4b-96ec-8df23aabe81e 00:30:29.630 18:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 406bce1d-4d97-4d4b-96ec-8df23aabe81e ]] 00:30:29.630 18:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 406bce1d-4d97-4d4b-96ec-8df23aabe81e 5120 00:30:29.630 18:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:30:29.630 18:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:30:29.630 18:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=406bce1d-4d97-4d4b-96ec-8df23aabe81e 00:30:29.630 18:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:30:29.630 18:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 406bce1d-4d97-4d4b-96ec-8df23aabe81e 00:30:29.630 18:39:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=406bce1d-4d97-4d4b-96ec-8df23aabe81e 00:30:29.630 18:39:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:29.630 18:39:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:30:29.630 18:39:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:30:29.630 18:39:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 406bce1d-4d97-4d4b-96ec-8df23aabe81e 00:30:29.630 18:39:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:29.630 { 00:30:29.630 "name": "406bce1d-4d97-4d4b-96ec-8df23aabe81e", 00:30:29.630 "aliases": [ 00:30:29.630 "lvs/basen1p0" 00:30:29.630 ], 00:30:29.630 "product_name": "Logical Volume", 00:30:29.630 "block_size": 4096, 00:30:29.630 "num_blocks": 5242880, 00:30:29.630 "uuid": "406bce1d-4d97-4d4b-96ec-8df23aabe81e", 00:30:29.630 "assigned_rate_limits": { 00:30:29.630 "rw_ios_per_sec": 0, 00:30:29.630 "rw_mbytes_per_sec": 0, 00:30:29.630 "r_mbytes_per_sec": 0, 00:30:29.630 "w_mbytes_per_sec": 0 00:30:29.630 }, 00:30:29.630 "claimed": false, 00:30:29.630 "zoned": false, 00:30:29.630 "supported_io_types": { 00:30:29.630 "read": true, 00:30:29.630 "write": true, 00:30:29.630 "unmap": true, 00:30:29.630 "flush": false, 00:30:29.630 "reset": true, 00:30:29.630 "nvme_admin": false, 00:30:29.630 "nvme_io": false, 00:30:29.630 "nvme_io_md": false, 00:30:29.630 "write_zeroes": 
true, 00:30:29.630 "zcopy": false, 00:30:29.630 "get_zone_info": false, 00:30:29.630 "zone_management": false, 00:30:29.630 "zone_append": false, 00:30:29.630 "compare": false, 00:30:29.630 "compare_and_write": false, 00:30:29.630 "abort": false, 00:30:29.630 "seek_hole": true, 00:30:29.630 "seek_data": true, 00:30:29.630 "copy": false, 00:30:29.630 "nvme_iov_md": false 00:30:29.630 }, 00:30:29.630 "driver_specific": { 00:30:29.630 "lvol": { 00:30:29.630 "lvol_store_uuid": "2e760d3d-8d3e-477a-9ebe-245f4661448e", 00:30:29.630 "base_bdev": "basen1", 00:30:29.630 "thin_provision": true, 00:30:29.630 "num_allocated_clusters": 0, 00:30:29.630 "snapshot": false, 00:30:29.630 "clone": false, 00:30:29.630 "esnap_clone": false 00:30:29.630 } 00:30:29.630 } 00:30:29.630 } 00:30:29.630 ]' 00:30:29.630 18:39:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:29.630 18:39:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:30:29.630 18:39:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:29.891 18:39:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:30:29.891 18:39:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:30:29.891 18:39:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:30:29.891 18:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:30:29.891 18:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:30:29.891 18:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:30:29.891 18:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:30:29.891 18:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:30:29.891 18:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:30:30.151 18:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:30:30.151 18:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:30:30.151 18:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 406bce1d-4d97-4d4b-96ec-8df23aabe81e -c cachen1p0 --l2p_dram_limit 2 00:30:30.414 [2024-11-20 18:39:48.892123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.414 [2024-11-20 18:39:48.892326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:30.414 [2024-11-20 18:39:48.892356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:30:30.414 [2024-11-20 18:39:48.892366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.414 [2024-11-20 18:39:48.892465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.414 [2024-11-20 18:39:48.892477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:30.414 [2024-11-20 18:39:48.892489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.061 ms 00:30:30.414 [2024-11-20 18:39:48.892498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.414 [2024-11-20 18:39:48.892523] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:30.414 [2024-11-20 
18:39:48.893333] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:30.414 [2024-11-20 18:39:48.893359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.414 [2024-11-20 18:39:48.893368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:30.414 [2024-11-20 18:39:48.893380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.838 ms 00:30:30.414 [2024-11-20 18:39:48.893388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.414 [2024-11-20 18:39:48.893430] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID fc067ecb-7c25-46bc-a976-5179fb694498 00:30:30.414 [2024-11-20 18:39:48.895314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.414 [2024-11-20 18:39:48.895373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:30:30.414 [2024-11-20 18:39:48.895385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:30:30.414 [2024-11-20 18:39:48.895396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.414 [2024-11-20 18:39:48.903976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.414 [2024-11-20 18:39:48.904029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:30.414 [2024-11-20 18:39:48.904043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.480 ms 00:30:30.414 [2024-11-20 18:39:48.904053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.414 [2024-11-20 18:39:48.904121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.414 [2024-11-20 18:39:48.904134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:30.414 [2024-11-20 18:39:48.904143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:30:30.414 [2024-11-20 18:39:48.904156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.414 [2024-11-20 18:39:48.904217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.414 [2024-11-20 18:39:48.904230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:30.414 [2024-11-20 18:39:48.904239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:30:30.414 [2024-11-20 18:39:48.904254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.414 [2024-11-20 18:39:48.904278] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:30.414 [2024-11-20 18:39:48.908642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.414 [2024-11-20 18:39:48.908684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:30.414 [2024-11-20 18:39:48.908699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.369 ms 00:30:30.414 [2024-11-20 18:39:48.908707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.414 [2024-11-20 18:39:48.908739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.414 [2024-11-20 18:39:48.908748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:30.414 [2024-11-20 18:39:48.908758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:30.414 [2024-11-20 18:39:48.908767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:30.414 [2024-11-20 18:39:48.908805] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:30:30.414 [2024-11-20 18:39:48.908952] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:30.415 [2024-11-20 18:39:48.908969] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:30.415 [2024-11-20 18:39:48.908982] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:30.415 [2024-11-20 18:39:48.908994] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:30.415 [2024-11-20 18:39:48.909004] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:30.415 [2024-11-20 18:39:48.909015] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:30.415 [2024-11-20 18:39:48.909023] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:30.415 [2024-11-20 18:39:48.909035] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:30.415 [2024-11-20 18:39:48.909042] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:30.415 [2024-11-20 18:39:48.909052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.415 [2024-11-20 18:39:48.909060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:30.415 [2024-11-20 18:39:48.909070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.250 ms 00:30:30.415 [2024-11-20 18:39:48.909077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.415 [2024-11-20 18:39:48.909183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.415 [2024-11-20 18:39:48.909193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:30.415 [2024-11-20 18:39:48.909206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:30:30.415 [2024-11-20 18:39:48.909221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.415 [2024-11-20 18:39:48.909329] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:30.415 [2024-11-20 18:39:48.909340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:30.415 [2024-11-20 18:39:48.909350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:30.415 [2024-11-20 18:39:48.909359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:30.415 [2024-11-20 18:39:48.909369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:30.415 [2024-11-20 18:39:48.909376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:30.415 [2024-11-20 18:39:48.909384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:30.415 [2024-11-20 18:39:48.909392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:30.415 [2024-11-20 18:39:48.909401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:30.415 [2024-11-20 18:39:48.909407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:30.415 [2024-11-20 18:39:48.909416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:30.415 [2024-11-20 18:39:48.909423] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:30:30.415 [2024-11-20 18:39:48.909432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:30.415 [2024-11-20 18:39:48.909440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:30.415 [2024-11-20 18:39:48.909449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:30.415 [2024-11-20 18:39:48.909455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:30.415 [2024-11-20 18:39:48.909466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:30.415 [2024-11-20 18:39:48.909473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:30.415 [2024-11-20 18:39:48.909483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:30.415 [2024-11-20 18:39:48.909490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:30.415 [2024-11-20 18:39:48.909499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:30.415 [2024-11-20 18:39:48.909505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:30.415 [2024-11-20 18:39:48.909516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:30.415 [2024-11-20 18:39:48.909523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:30.415 [2024-11-20 18:39:48.909532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:30.415 [2024-11-20 18:39:48.909539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:30.415 [2024-11-20 18:39:48.909548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:30.415 [2024-11-20 18:39:48.909554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:30.415 [2024-11-20 18:39:48.909563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:30.415 [2024-11-20 18:39:48.909570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:30.415 [2024-11-20 18:39:48.909578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:30.415 [2024-11-20 18:39:48.909584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:30.415 [2024-11-20 18:39:48.909595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:30.415 [2024-11-20 18:39:48.909602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:30.415 [2024-11-20 18:39:48.909610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:30.415 [2024-11-20 18:39:48.909617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:30.415 [2024-11-20 18:39:48.909625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:30.415 [2024-11-20 18:39:48.909632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:30.415 [2024-11-20 18:39:48.909641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:30.415 [2024-11-20 18:39:48.909647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:30.415 [2024-11-20 18:39:48.909655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:30.415 [2024-11-20 18:39:48.909662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:30.415 [2024-11-20 18:39:48.909670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:30.415 [2024-11-20 18:39:48.909676] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:30:30.415 [2024-11-20 18:39:48.909687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:30.415 [2024-11-20 18:39:48.909694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:30.415 [2024-11-20 18:39:48.909704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:30.415 [2024-11-20 18:39:48.909713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:30.415 [2024-11-20 18:39:48.909724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:30.415 [2024-11-20 18:39:48.909730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:30.415 [2024-11-20 18:39:48.909739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:30.415 [2024-11-20 18:39:48.909745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:30.415 [2024-11-20 18:39:48.909753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:30.415 [2024-11-20 18:39:48.909765] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:30.415 [2024-11-20 18:39:48.909778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:30.415 [2024-11-20 18:39:48.909790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:30.415 [2024-11-20 18:39:48.909800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:30.415 [2024-11-20 18:39:48.909807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:30.415 [2024-11-20 18:39:48.909816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:30.415 [2024-11-20 18:39:48.909823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:30.415 [2024-11-20 18:39:48.909833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:30.415 [2024-11-20 18:39:48.909840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:30.415 [2024-11-20 18:39:48.909848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:30.415 [2024-11-20 18:39:48.909856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:30.415 [2024-11-20 18:39:48.909867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:30.415 [2024-11-20 18:39:48.909874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:30.415 [2024-11-20 18:39:48.909883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:30.415 [2024-11-20 18:39:48.909891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:30.415 [2024-11-20 18:39:48.909901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:30.415 [2024-11-20 18:39:48.909908] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:30.416 [2024-11-20 18:39:48.909918] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:30.416 [2024-11-20 18:39:48.909927] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:30.416 [2024-11-20 18:39:48.909937] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:30.416 [2024-11-20 18:39:48.909943] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:30.416 [2024-11-20 18:39:48.909953] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:30.416 [2024-11-20 18:39:48.909962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.416 [2024-11-20 18:39:48.909971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:30.416 [2024-11-20 18:39:48.909979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.705 ms 00:30:30.416 [2024-11-20 18:39:48.909988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.416 [2024-11-20 18:39:48.910026] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
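The layout dump above is printed while bdev_ftl_create assembles the FTL instance from the two pieces prepared earlier in the trace: a thin-provisioned lvol on the base device and a split partition of the cache device. Condensed into one place from the RPC calls recorded above (the UUIDs are the ones from this run and change on every invocation):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Base side: a 20480 MiB thin-provisioned lvol on top of basen1.
    $rpc bdev_lvol_create_lvstore basen1 lvs
    $rpc bdev_lvol_create basen1p0 20480 -t -u 2e760d3d-8d3e-477a-9ebe-245f4661448e

    # Cache side: attach the second controller and split off a 5120 MiB partition.
    $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
    $rpc bdev_split_create cachen1 -s 5120 1

    # Tie them together; -t 60 gives the RPC time to ride out the NV-cache scrub.
    $rpc -t 60 bdev_ftl_create -b ftl -d 406bce1d-4d97-4d4b-96ec-8df23aabe81e -c cachen1p0 --l2p_dram_limit 2

The scrub notice that follows is expected on a fresh instance: the five NV cache chunks are scrubbed before the device is usable, which is why that step dominates the 'FTL startup' duration reported below.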
00:30:30.416 [2024-11-20 18:39:48.910040] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:33.715 [2024-11-20 18:39:52.190935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.715 [2024-11-20 18:39:52.191018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:33.715 [2024-11-20 18:39:52.191037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3280.893 ms 00:30:33.715 [2024-11-20 18:39:52.191049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.715 [2024-11-20 18:39:52.223450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.715 [2024-11-20 18:39:52.223520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:33.715 [2024-11-20 18:39:52.223535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.127 ms 00:30:33.715 [2024-11-20 18:39:52.223546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.715 [2024-11-20 18:39:52.223642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.715 [2024-11-20 18:39:52.223657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:33.715 [2024-11-20 18:39:52.223666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:33.715 [2024-11-20 18:39:52.223679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.715 [2024-11-20 18:39:52.259361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.715 [2024-11-20 18:39:52.259411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:33.715 [2024-11-20 18:39:52.259423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.641 ms 00:30:33.715 [2024-11-20 18:39:52.259436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.715 [2024-11-20 18:39:52.259474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.715 [2024-11-20 18:39:52.259488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:33.715 [2024-11-20 18:39:52.259497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:33.715 [2024-11-20 18:39:52.259507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.715 [2024-11-20 18:39:52.260141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.715 [2024-11-20 18:39:52.260174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:33.715 [2024-11-20 18:39:52.260185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.512 ms 00:30:33.715 [2024-11-20 18:39:52.260197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.715 [2024-11-20 18:39:52.260270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.715 [2024-11-20 18:39:52.260282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:33.715 [2024-11-20 18:39:52.260293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:30:33.715 [2024-11-20 18:39:52.260306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.715 [2024-11-20 18:39:52.277936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.715 [2024-11-20 18:39:52.277992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:33.715 [2024-11-20 18:39:52.278005] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.610 ms 00:30:33.715 [2024-11-20 18:39:52.278016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.715 [2024-11-20 18:39:52.291418] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:33.715 [2024-11-20 18:39:52.293015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.715 [2024-11-20 18:39:52.293065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:33.715 [2024-11-20 18:39:52.293080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.877 ms 00:30:33.715 [2024-11-20 18:39:52.293088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.715 [2024-11-20 18:39:52.329978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.715 [2024-11-20 18:39:52.330043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:30:33.715 [2024-11-20 18:39:52.330065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.826 ms 00:30:33.715 [2024-11-20 18:39:52.330074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.715 [2024-11-20 18:39:52.330211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.715 [2024-11-20 18:39:52.330228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:33.715 [2024-11-20 18:39:52.330244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:30:33.715 [2024-11-20 18:39:52.330252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.976 [2024-11-20 18:39:52.356125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.976 [2024-11-20 18:39:52.356352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:30:33.976 [2024-11-20 18:39:52.356384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.809 ms 00:30:33.976 [2024-11-20 18:39:52.356394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.976 [2024-11-20 18:39:52.382649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.976 [2024-11-20 18:39:52.382707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:30:33.976 [2024-11-20 18:39:52.382724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.115 ms 00:30:33.976 [2024-11-20 18:39:52.382732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.976 [2024-11-20 18:39:52.383414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.976 [2024-11-20 18:39:52.383436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:33.976 [2024-11-20 18:39:52.383449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.627 ms 00:30:33.976 [2024-11-20 18:39:52.383456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.976 [2024-11-20 18:39:52.463466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.976 [2024-11-20 18:39:52.463524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:30:33.976 [2024-11-20 18:39:52.463545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 79.939 ms 00:30:33.976 [2024-11-20 18:39:52.463555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.976 [2024-11-20 18:39:52.491552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:30:33.976 [2024-11-20 18:39:52.491606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:30:33.976 [2024-11-20 18:39:52.491631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.870 ms 00:30:33.976 [2024-11-20 18:39:52.491641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.976 [2024-11-20 18:39:52.518592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.976 [2024-11-20 18:39:52.518811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:30:33.976 [2024-11-20 18:39:52.518842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.892 ms 00:30:33.976 [2024-11-20 18:39:52.518851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.976 [2024-11-20 18:39:52.546449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.976 [2024-11-20 18:39:52.546500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:33.976 [2024-11-20 18:39:52.546517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.547 ms 00:30:33.976 [2024-11-20 18:39:52.546524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.976 [2024-11-20 18:39:52.546584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.976 [2024-11-20 18:39:52.546596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:33.976 [2024-11-20 18:39:52.546612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:33.976 [2024-11-20 18:39:52.546620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.976 [2024-11-20 18:39:52.546732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.976 [2024-11-20 18:39:52.546744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:33.976 [2024-11-20 18:39:52.546758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:30:33.976 [2024-11-20 18:39:52.546766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.976 [2024-11-20 18:39:52.547949] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3655.357 ms, result 0 00:30:33.976 { 00:30:33.976 "name": "ftl", 00:30:33.976 "uuid": "fc067ecb-7c25-46bc-a976-5179fb694498" 00:30:33.976 } 00:30:33.976 18:39:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:30:34.236 [2024-11-20 18:39:52.783087] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:34.236 18:39:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:30:34.496 18:39:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:30:34.756 [2024-11-20 18:39:53.219571] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:34.756 18:39:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:30:35.018 [2024-11-20 18:39:53.437029] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:35.018 18:39:53 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:30:35.277 Fill FTL, iteration 1 00:30:35.277 18:39:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:30:35.277 18:39:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:30:35.277 18:39:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:30:35.277 18:39:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:30:35.277 18:39:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:30:35.277 18:39:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:30:35.277 18:39:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:30:35.277 18:39:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:30:35.277 18:39:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:30:35.277 18:39:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:35.277 18:39:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:30:35.277 18:39:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:30:35.277 18:39:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:35.277 18:39:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:35.277 18:39:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:35.277 18:39:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:30:35.277 18:39:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=83213 00:30:35.277 18:39:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:30:35.277 18:39:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:30:35.277 18:39:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 83213 /var/tmp/spdk.tgt.sock 00:30:35.277 18:39:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83213 ']' 00:30:35.277 18:39:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:30:35.277 18:39:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:35.277 18:39:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:30:35.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:30:35.277 18:39:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:35.277 18:39:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:35.277 [2024-11-20 18:39:53.872551] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
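With the FTL bdev created, the target publishes it over NVMe/TCP so that every test I/O travels the fabric path rather than hitting the bdev directly. The export, condensed from the four RPC calls above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Export bdev "ftl" as namespace 1 of a single-controller TCP subsystem
    # listening on loopback port 4420.
    $rpc nvmf_create_transport --trtype TCP
    $rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
    $rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
    $rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1

A second SPDK process (pid 83213, core 1, RPC socket /var/tmp/spdk.tgt.sock) is then started to act as the initiator.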
00:30:35.277 [2024-11-20 18:39:53.872839] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83213 ] 00:30:35.535 [2024-11-20 18:39:54.028737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:35.535 [2024-11-20 18:39:54.135215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:36.470 18:39:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:36.470 18:39:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:36.470 18:39:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:30:36.470 ftln1 00:30:36.470 18:39:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:30:36.470 18:39:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:30:36.728 18:39:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:30:36.728 18:39:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 83213 00:30:36.728 18:39:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83213 ']' 00:30:36.728 18:39:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83213 00:30:36.728 18:39:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:36.728 18:39:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:36.728 18:39:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83213 00:30:36.728 killing process with pid 83213 00:30:36.728 18:39:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:36.728 18:39:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:36.728 18:39:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83213' 00:30:36.728 18:39:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83213 00:30:36.728 18:39:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83213 00:30:38.628 18:39:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:30:38.628 18:39:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:30:38.628 [2024-11-20 18:39:56.824341] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
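The initiator process is deliberately short-lived: it attaches to the subsystem (the namespace surfaces as bdev "ftln1"), its bdev subsystem config is saved to ini.json, and it is killed. Each transfer afterwards runs as a standalone spdk_dd that replays that JSON config instead of talking to a live initiator. The fill command from the trace above, restated on separate lines for readability:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --cpumask='[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --if=/dev/urandom --ob=ftln1 \
        --bs=1048576 --count=1024 --qd=2 --seek=0
    # 1024 writes of 1 MiB each (1 GiB total) at queue depth 2, starting at
    # block offset 0 of the exported FTL bdev.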
00:30:38.628 [2024-11-20 18:39:56.824585] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83255 ] 00:30:38.628 [2024-11-20 18:39:56.982767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:38.628 [2024-11-20 18:39:57.088757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:40.012  [2024-11-20T18:39:59.582Z] Copying: 238/1024 [MB] (238 MBps) [2024-11-20T18:40:00.524Z] Copying: 470/1024 [MB] (232 MBps) [2024-11-20T18:40:01.466Z] Copying: 706/1024 [MB] (236 MBps) [2024-11-20T18:40:02.037Z] Copying: 945/1024 [MB] (239 MBps) [2024-11-20T18:40:02.606Z] Copying: 1024/1024 [MB] (average 235 MBps) 00:30:43.977 00:30:43.977 Calculate MD5 checksum, iteration 1 00:30:43.977 18:40:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:30:43.977 18:40:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:30:43.977 18:40:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:43.977 18:40:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:43.977 18:40:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:43.977 18:40:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:43.977 18:40:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:43.977 18:40:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:43.977 [2024-11-20 18:40:02.476418] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
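The read-back that follows mirrors the fill with the directions swapped: --ib/--of replace --if/--ob, and --skip replaces --seek, so the same 1 GiB region is pulled back into a plain file. Only the checksum field of md5sum's output is kept, as in the script line visible just below:

    sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')

As with the dirty-shutdown test that finished at the top of this trace, the recorded sums serve as the reference the data is verified against later in the test.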
00:30:43.977 [2024-11-20 18:40:02.476686] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83319 ] 00:30:44.235 [2024-11-20 18:40:02.632893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.235 [2024-11-20 18:40:02.719015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:45.614  [2024-11-20T18:40:04.503Z] Copying: 740/1024 [MB] (740 MBps) [2024-11-20T18:40:05.073Z] Copying: 1024/1024 [MB] (average 708 MBps) 00:30:46.444 00:30:46.444 18:40:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:30:46.444 18:40:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:48.981 18:40:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:48.981 18:40:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=38c3e805980537d77425176bd97a7bb1 00:30:48.981 18:40:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:48.981 18:40:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:48.981 18:40:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:30:48.981 Fill FTL, iteration 2 00:30:48.981 18:40:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:48.981 18:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:48.981 18:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:48.981 18:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:48.981 18:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:48.981 18:40:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:48.981 [2024-11-20 18:40:07.199007] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
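Stepping back, the fill and checksum passes form one loop per iteration: write a 1024 MiB stripe of random data, read it back through the FTL bdev, and record its md5 in sums[i]. A sketch reconstructed from the seek=, skip=, sums[i]= and (( i++ )) bookkeeping in the trace (iterations and file are assumed to be set by the surrounding script; file is /home/vagrant/spdk_repo/spdk/test/ftl/file in this run):

    seek=0
    skip=0
    for ((i = 0; i < iterations; i++)); do
        # --seek/--skip count --bs-sized blocks: 1024 x 1 MiB = 1 GiB per
        # stripe, so consecutive iterations never overlap.
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek="$seek"
        seek=$((seek + 1024))
        tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip="$skip"
        skip=$((skip + 1024))
        sums[i]=$(md5sum "$file" | cut -f1 -d' ')
    done

Note the asymmetry in the progress lines: the fills run at roughly 220-240 MBps while the read-backs average 600-740 MBps, so the FTL write path is the slower half of each iteration here.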
00:30:48.981 [2024-11-20 18:40:07.199264] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83374 ] 00:30:48.981 [2024-11-20 18:40:07.358457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:48.981 [2024-11-20 18:40:07.464940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:50.431  [2024-11-20T18:40:10.003Z] Copying: 190/1024 [MB] (190 MBps) [2024-11-20T18:40:10.945Z] Copying: 418/1024 [MB] (228 MBps) [2024-11-20T18:40:11.886Z] Copying: 648/1024 [MB] (230 MBps) [2024-11-20T18:40:12.827Z] Copying: 869/1024 [MB] (221 MBps) [2024-11-20T18:40:13.400Z] Copying: 1024/1024 [MB] (average 219 MBps) 00:30:54.771 00:30:54.771 Calculate MD5 checksum, iteration 2 00:30:54.771 18:40:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:30:54.771 18:40:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:30:54.771 18:40:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:54.771 18:40:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:54.771 18:40:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:54.771 18:40:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:54.771 18:40:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:54.771 18:40:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:54.771 [2024-11-20 18:40:13.216229] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
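The recorded checksums are what make the upgrade test meaningful: after the prep_upgrade_on_shutdown cycle below, the same ranges can be re-read and compared against sums[]. The actual post-restart verification falls outside this excerpt, so treat the following re-check only as an illustrative sketch:

    for ((i = 0; i < iterations; i++)); do
        tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=$((i * 1024))
        [[ "$(md5sum "$file" | cut -f1 -d' ')" == "${sums[i]}" ]] || exit 1
    done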
00:30:54.771 [2024-11-20 18:40:13.216364] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83439 ] 00:30:54.771 [2024-11-20 18:40:13.375609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:55.032 [2024-11-20 18:40:13.466513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:56.418  [2024-11-20T18:40:15.620Z] Copying: 599/1024 [MB] (599 MBps) [2024-11-20T18:40:16.563Z] Copying: 1024/1024 [MB] (average 613 MBps) 00:30:57.935 00:30:57.935 18:40:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:30:57.935 18:40:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:00.486 18:40:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:31:00.486 18:40:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=b75d60474968c27beae4bf5078bf7895 00:31:00.486 18:40:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:31:00.486 18:40:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:31:00.486 18:40:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:00.486 [2024-11-20 18:40:18.667071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.486 [2024-11-20 18:40:18.667136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:00.486 [2024-11-20 18:40:18.667169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:31:00.486 [2024-11-20 18:40:18.667178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.486 [2024-11-20 18:40:18.667201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.486 [2024-11-20 18:40:18.667217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:00.486 [2024-11-20 18:40:18.667225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:00.486 [2024-11-20 18:40:18.667236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.486 [2024-11-20 18:40:18.667255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.486 [2024-11-20 18:40:18.667264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:00.486 [2024-11-20 18:40:18.667271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:00.486 [2024-11-20 18:40:18.667278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.486 [2024-11-20 18:40:18.667336] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.253 ms, result 0 00:31:00.486 true 00:31:00.486 18:40:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:00.486 { 00:31:00.486 "name": "ftl", 00:31:00.486 "properties": [ 00:31:00.486 { 00:31:00.486 "name": "superblock_version", 00:31:00.486 "value": 5, 00:31:00.486 "read-only": true 00:31:00.486 }, 00:31:00.486 { 00:31:00.486 "name": "base_device", 00:31:00.486 "bands": [ 00:31:00.487 { 00:31:00.487 "id": 0, 00:31:00.487 "state": "FREE", 00:31:00.487 "validity": 0.0 
00:31:00.487 }, 00:31:00.487 { 00:31:00.487 "id": 1, 00:31:00.487 "state": "FREE", 00:31:00.487 "validity": 0.0 00:31:00.487 }, 00:31:00.487 { 00:31:00.487 "id": 2, 00:31:00.487 "state": "FREE", 00:31:00.487 "validity": 0.0 00:31:00.487 }, 00:31:00.487 { 00:31:00.487 "id": 3, 00:31:00.487 "state": "FREE", 00:31:00.487 "validity": 0.0 00:31:00.487 }, 00:31:00.487 { 00:31:00.487 "id": 4, 00:31:00.487 "state": "FREE", 00:31:00.487 "validity": 0.0 00:31:00.487 }, 00:31:00.487 { 00:31:00.487 "id": 5, 00:31:00.487 "state": "FREE", 00:31:00.487 "validity": 0.0 00:31:00.487 }, 00:31:00.487 { 00:31:00.487 "id": 6, 00:31:00.487 "state": "FREE", 00:31:00.487 "validity": 0.0 00:31:00.487 }, 00:31:00.487 { 00:31:00.487 "id": 7, 00:31:00.487 "state": "FREE", 00:31:00.487 "validity": 0.0 00:31:00.487 }, 00:31:00.487 { 00:31:00.487 "id": 8, 00:31:00.487 "state": "FREE", 00:31:00.487 "validity": 0.0 00:31:00.487 }, 00:31:00.487 { 00:31:00.487 "id": 9, 00:31:00.487 "state": "FREE", 00:31:00.487 "validity": 0.0 00:31:00.487 }, 00:31:00.487 { 00:31:00.487 "id": 10, 00:31:00.487 "state": "FREE", 00:31:00.487 "validity": 0.0 00:31:00.487 }, 00:31:00.487 { 00:31:00.487 "id": 11, 00:31:00.487 "state": "FREE", 00:31:00.487 "validity": 0.0 00:31:00.487 }, 00:31:00.487 { 00:31:00.487 "id": 12, 00:31:00.487 "state": "FREE", 00:31:00.487 "validity": 0.0 00:31:00.487 }, 00:31:00.487 { 00:31:00.487 "id": 13, 00:31:00.487 "state": "FREE", 00:31:00.487 "validity": 0.0 00:31:00.487 }, 00:31:00.487 { 00:31:00.487 "id": 14, 00:31:00.487 "state": "FREE", 00:31:00.487 "validity": 0.0 00:31:00.487 }, 00:31:00.487 { 00:31:00.487 "id": 15, 00:31:00.487 "state": "FREE", 00:31:00.487 "validity": 0.0 00:31:00.487 }, 00:31:00.487 { 00:31:00.487 "id": 16, 00:31:00.487 "state": "FREE", 00:31:00.487 "validity": 0.0 00:31:00.487 }, 00:31:00.487 { 00:31:00.487 "id": 17, 00:31:00.487 "state": "FREE", 00:31:00.487 "validity": 0.0 00:31:00.487 } 00:31:00.487 ], 00:31:00.487 "read-only": true 00:31:00.487 }, 00:31:00.487 { 00:31:00.487 "name": "cache_device", 00:31:00.487 "type": "bdev", 00:31:00.487 "chunks": [ 00:31:00.487 { 00:31:00.487 "id": 0, 00:31:00.487 "state": "INACTIVE", 00:31:00.487 "utilization": 0.0 00:31:00.487 }, 00:31:00.487 { 00:31:00.487 "id": 1, 00:31:00.487 "state": "CLOSED", 00:31:00.487 "utilization": 1.0 00:31:00.487 }, 00:31:00.487 { 00:31:00.487 "id": 2, 00:31:00.487 "state": "CLOSED", 00:31:00.487 "utilization": 1.0 00:31:00.487 }, 00:31:00.487 { 00:31:00.487 "id": 3, 00:31:00.487 "state": "OPEN", 00:31:00.487 "utilization": 0.001953125 00:31:00.487 }, 00:31:00.487 { 00:31:00.487 "id": 4, 00:31:00.487 "state": "OPEN", 00:31:00.487 "utilization": 0.0 00:31:00.487 } 00:31:00.487 ], 00:31:00.487 "read-only": true 00:31:00.487 }, 00:31:00.487 { 00:31:00.487 "name": "verbose_mode", 00:31:00.487 "value": true, 00:31:00.487 "unit": "", 00:31:00.487 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:00.487 }, 00:31:00.487 { 00:31:00.487 "name": "prep_upgrade_on_shutdown", 00:31:00.487 "value": false, 00:31:00.487 "unit": "", 00:31:00.487 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:00.487 } 00:31:00.487 ] 00:31:00.487 } 00:31:00.487 18:40:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:31:00.487 [2024-11-20 18:40:19.071590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:31:00.487 [2024-11-20 18:40:19.071635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:00.487 [2024-11-20 18:40:19.071648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:00.487 [2024-11-20 18:40:19.071656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.487 [2024-11-20 18:40:19.071677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.487 [2024-11-20 18:40:19.071685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:00.487 [2024-11-20 18:40:19.071693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:00.487 [2024-11-20 18:40:19.071700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.487 [2024-11-20 18:40:19.071719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.487 [2024-11-20 18:40:19.071726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:00.487 [2024-11-20 18:40:19.071734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:00.487 [2024-11-20 18:40:19.071741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.487 [2024-11-20 18:40:19.071796] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.194 ms, result 0 00:31:00.487 true 00:31:00.487 18:40:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:31:00.487 18:40:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:31:00.487 18:40:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:00.746 18:40:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:31:00.746 18:40:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:31:00.746 18:40:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:01.006 [2024-11-20 18:40:19.490689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.006 [2024-11-20 18:40:19.490728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:01.006 [2024-11-20 18:40:19.490739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:01.006 [2024-11-20 18:40:19.490745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.006 [2024-11-20 18:40:19.490763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.006 [2024-11-20 18:40:19.490769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:01.006 [2024-11-20 18:40:19.490776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:01.006 [2024-11-20 18:40:19.490781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.006 [2024-11-20 18:40:19.490796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.006 [2024-11-20 18:40:19.490801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:01.006 [2024-11-20 18:40:19.490807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:01.006 [2024-11-20 18:40:19.490812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:31:01.006 [2024-11-20 18:40:19.490857] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.165 ms, result 0 00:31:01.006 true 00:31:01.006 18:40:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:01.267 { 00:31:01.267 "name": "ftl", 00:31:01.267 "properties": [ 00:31:01.267 { 00:31:01.268 "name": "superblock_version", 00:31:01.268 "value": 5, 00:31:01.268 "read-only": true 00:31:01.268 }, 00:31:01.268 { 00:31:01.268 "name": "base_device", 00:31:01.268 "bands": [ 00:31:01.268 { 00:31:01.268 "id": 0, 00:31:01.268 "state": "FREE", 00:31:01.268 "validity": 0.0 00:31:01.268 }, 00:31:01.268 { 00:31:01.268 "id": 1, 00:31:01.268 "state": "FREE", 00:31:01.268 "validity": 0.0 00:31:01.268 }, 00:31:01.268 { 00:31:01.268 "id": 2, 00:31:01.268 "state": "FREE", 00:31:01.268 "validity": 0.0 00:31:01.268 }, 00:31:01.268 { 00:31:01.268 "id": 3, 00:31:01.268 "state": "FREE", 00:31:01.268 "validity": 0.0 00:31:01.268 }, 00:31:01.268 { 00:31:01.268 "id": 4, 00:31:01.268 "state": "FREE", 00:31:01.268 "validity": 0.0 00:31:01.268 }, 00:31:01.268 { 00:31:01.268 "id": 5, 00:31:01.268 "state": "FREE", 00:31:01.268 "validity": 0.0 00:31:01.268 }, 00:31:01.268 { 00:31:01.268 "id": 6, 00:31:01.268 "state": "FREE", 00:31:01.268 "validity": 0.0 00:31:01.268 }, 00:31:01.268 { 00:31:01.268 "id": 7, 00:31:01.268 "state": "FREE", 00:31:01.268 "validity": 0.0 00:31:01.268 }, 00:31:01.268 { 00:31:01.268 "id": 8, 00:31:01.268 "state": "FREE", 00:31:01.268 "validity": 0.0 00:31:01.268 }, 00:31:01.268 { 00:31:01.268 "id": 9, 00:31:01.268 "state": "FREE", 00:31:01.268 "validity": 0.0 00:31:01.268 }, 00:31:01.268 { 00:31:01.268 "id": 10, 00:31:01.268 "state": "FREE", 00:31:01.268 "validity": 0.0 00:31:01.268 }, 00:31:01.268 { 00:31:01.268 "id": 11, 00:31:01.268 "state": "FREE", 00:31:01.268 "validity": 0.0 00:31:01.268 }, 00:31:01.268 { 00:31:01.268 "id": 12, 00:31:01.268 "state": "FREE", 00:31:01.268 "validity": 0.0 00:31:01.268 }, 00:31:01.268 { 00:31:01.268 "id": 13, 00:31:01.268 "state": "FREE", 00:31:01.268 "validity": 0.0 00:31:01.268 }, 00:31:01.268 { 00:31:01.268 "id": 14, 00:31:01.268 "state": "FREE", 00:31:01.268 "validity": 0.0 00:31:01.268 }, 00:31:01.268 { 00:31:01.268 "id": 15, 00:31:01.268 "state": "FREE", 00:31:01.268 "validity": 0.0 00:31:01.268 }, 00:31:01.268 { 00:31:01.268 "id": 16, 00:31:01.268 "state": "FREE", 00:31:01.268 "validity": 0.0 00:31:01.268 }, 00:31:01.268 { 00:31:01.268 "id": 17, 00:31:01.268 "state": "FREE", 00:31:01.268 "validity": 0.0 00:31:01.268 } 00:31:01.268 ], 00:31:01.268 "read-only": true 00:31:01.268 }, 00:31:01.268 { 00:31:01.268 "name": "cache_device", 00:31:01.268 "type": "bdev", 00:31:01.268 "chunks": [ 00:31:01.268 { 00:31:01.268 "id": 0, 00:31:01.268 "state": "INACTIVE", 00:31:01.268 "utilization": 0.0 00:31:01.268 }, 00:31:01.268 { 00:31:01.268 "id": 1, 00:31:01.268 "state": "CLOSED", 00:31:01.268 "utilization": 1.0 00:31:01.268 }, 00:31:01.268 { 00:31:01.268 "id": 2, 00:31:01.268 "state": "CLOSED", 00:31:01.268 "utilization": 1.0 00:31:01.268 }, 00:31:01.268 { 00:31:01.268 "id": 3, 00:31:01.268 "state": "OPEN", 00:31:01.268 "utilization": 0.001953125 00:31:01.268 }, 00:31:01.268 { 00:31:01.268 "id": 4, 00:31:01.268 "state": "OPEN", 00:31:01.268 "utilization": 0.0 00:31:01.268 } 00:31:01.268 ], 00:31:01.268 "read-only": true 00:31:01.268 }, 00:31:01.268 { 00:31:01.268 "name": "verbose_mode", 
00:31:01.268 "value": true, 00:31:01.268 "unit": "", 00:31:01.268 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:01.268 }, 00:31:01.268 { 00:31:01.268 "name": "prep_upgrade_on_shutdown", 00:31:01.268 "value": true, 00:31:01.268 "unit": "", 00:31:01.268 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:01.268 } 00:31:01.268 ] 00:31:01.268 } 00:31:01.268 18:40:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:31:01.268 18:40:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83091 ]] 00:31:01.268 18:40:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83091 00:31:01.268 18:40:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83091 ']' 00:31:01.268 18:40:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83091 00:31:01.268 18:40:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:31:01.268 18:40:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:01.268 18:40:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83091 00:31:01.268 killing process with pid 83091 00:31:01.268 18:40:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:01.268 18:40:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:01.268 18:40:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83091' 00:31:01.268 18:40:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83091 00:31:01.268 18:40:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83091 00:31:02.211 [2024-11-20 18:40:20.495546] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:31:02.211 [2024-11-20 18:40:20.506445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.211 [2024-11-20 18:40:20.506483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:31:02.211 [2024-11-20 18:40:20.506495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:02.211 [2024-11-20 18:40:20.506501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.211 [2024-11-20 18:40:20.506530] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:31:02.211 [2024-11-20 18:40:20.508748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.211 [2024-11-20 18:40:20.508775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:31:02.211 [2024-11-20 18:40:20.508784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.206 ms 00:31:02.211 [2024-11-20 18:40:20.508791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.340 [2024-11-20 18:40:27.924378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.340 [2024-11-20 18:40:27.924468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:31:10.340 [2024-11-20 18:40:27.924489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7415.537 ms 00:31:10.340 [2024-11-20 18:40:27.924500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.340 [2024-11-20 18:40:27.926296] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:31:10.340 [2024-11-20 18:40:27.926352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:31:10.340 [2024-11-20 18:40:27.926367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.767 ms 00:31:10.340 [2024-11-20 18:40:27.926376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.340 [2024-11-20 18:40:27.927556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.340 [2024-11-20 18:40:27.927583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:31:10.340 [2024-11-20 18:40:27.927595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.140 ms 00:31:10.340 [2024-11-20 18:40:27.927605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.340 [2024-11-20 18:40:27.939590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.340 [2024-11-20 18:40:27.939642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:31:10.340 [2024-11-20 18:40:27.939655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.925 ms 00:31:10.340 [2024-11-20 18:40:27.939665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.340 [2024-11-20 18:40:27.947246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.340 [2024-11-20 18:40:27.947299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:31:10.340 [2024-11-20 18:40:27.947313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.528 ms 00:31:10.340 [2024-11-20 18:40:27.947323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.340 [2024-11-20 18:40:27.947447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.340 [2024-11-20 18:40:27.947461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:31:10.340 [2024-11-20 18:40:27.947473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.071 ms 00:31:10.340 [2024-11-20 18:40:27.947491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.340 [2024-11-20 18:40:27.957476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.340 [2024-11-20 18:40:27.957506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:31:10.340 [2024-11-20 18:40:27.957516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.969 ms 00:31:10.340 [2024-11-20 18:40:27.957524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.340 [2024-11-20 18:40:27.967616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.340 [2024-11-20 18:40:27.967645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:31:10.340 [2024-11-20 18:40:27.967655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.060 ms 00:31:10.340 [2024-11-20 18:40:27.967662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.340 [2024-11-20 18:40:27.977318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.340 [2024-11-20 18:40:27.977347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:31:10.340 [2024-11-20 18:40:27.977357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.624 ms 00:31:10.340 [2024-11-20 18:40:27.977364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.340 [2024-11-20 18:40:27.987188] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.340 [2024-11-20 18:40:27.987219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:31:10.340 [2024-11-20 18:40:27.987229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.764 ms 00:31:10.340 [2024-11-20 18:40:27.987236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.340 [2024-11-20 18:40:27.987267] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:31:10.340 [2024-11-20 18:40:27.987282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:10.340 [2024-11-20 18:40:27.987294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:31:10.340 [2024-11-20 18:40:27.987312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:31:10.340 [2024-11-20 18:40:27.987321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:10.340 [2024-11-20 18:40:27.987329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:10.340 [2024-11-20 18:40:27.987337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:10.340 [2024-11-20 18:40:27.987344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:10.340 [2024-11-20 18:40:27.987352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:10.340 [2024-11-20 18:40:27.987360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:10.340 [2024-11-20 18:40:27.987368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:10.340 [2024-11-20 18:40:27.987376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:10.340 [2024-11-20 18:40:27.987384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:10.340 [2024-11-20 18:40:27.987391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:10.340 [2024-11-20 18:40:27.987399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:10.340 [2024-11-20 18:40:27.987407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:10.340 [2024-11-20 18:40:27.987414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:10.341 [2024-11-20 18:40:27.987421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:10.341 [2024-11-20 18:40:27.987428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:10.341 [2024-11-20 18:40:27.987438] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:31:10.341 [2024-11-20 18:40:27.987446] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: fc067ecb-7c25-46bc-a976-5179fb694498 00:31:10.341 [2024-11-20 18:40:27.987454] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:31:10.341 [2024-11-20 18:40:27.987461] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:31:10.341 [2024-11-20 18:40:27.987467] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:31:10.341 [2024-11-20 18:40:27.987475] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:31:10.341 [2024-11-20 18:40:27.987484] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:31:10.341 [2024-11-20 18:40:27.987491] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:31:10.341 [2024-11-20 18:40:27.987501] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:31:10.341 [2024-11-20 18:40:27.987507] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:31:10.341 [2024-11-20 18:40:27.987514] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:31:10.341 [2024-11-20 18:40:27.987521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.341 [2024-11-20 18:40:27.987529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:31:10.341 [2024-11-20 18:40:27.987544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.255 ms 00:31:10.341 [2024-11-20 18:40:27.987552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.341 [2024-11-20 18:40:28.000743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.341 [2024-11-20 18:40:28.000917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:31:10.341 [2024-11-20 18:40:28.000934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.176 ms 00:31:10.341 [2024-11-20 18:40:28.000942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.341 [2024-11-20 18:40:28.001335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.341 [2024-11-20 18:40:28.001346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:31:10.341 [2024-11-20 18:40:28.001355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.368 ms 00:31:10.341 [2024-11-20 18:40:28.001363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.341 [2024-11-20 18:40:28.046401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:10.341 [2024-11-20 18:40:28.046436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:10.341 [2024-11-20 18:40:28.046446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:10.341 [2024-11-20 18:40:28.046459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.341 [2024-11-20 18:40:28.046498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:10.341 [2024-11-20 18:40:28.046507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:10.341 [2024-11-20 18:40:28.046515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:10.341 [2024-11-20 18:40:28.046522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.341 [2024-11-20 18:40:28.046586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:10.341 [2024-11-20 18:40:28.046597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:10.341 [2024-11-20 18:40:28.046606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:10.341 [2024-11-20 18:40:28.046614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.341 [2024-11-20 18:40:28.046633] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:10.341 [2024-11-20 18:40:28.046642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:10.341 [2024-11-20 18:40:28.046650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:10.341 [2024-11-20 18:40:28.046658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.341 [2024-11-20 18:40:28.130351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:10.341 [2024-11-20 18:40:28.130545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:10.341 [2024-11-20 18:40:28.130563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:10.341 [2024-11-20 18:40:28.130571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.341 [2024-11-20 18:40:28.196783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:10.341 [2024-11-20 18:40:28.196825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:10.341 [2024-11-20 18:40:28.196838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:10.341 [2024-11-20 18:40:28.196845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.341 [2024-11-20 18:40:28.196940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:10.341 [2024-11-20 18:40:28.196950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:10.341 [2024-11-20 18:40:28.196958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:10.341 [2024-11-20 18:40:28.196966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.341 [2024-11-20 18:40:28.197009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:10.341 [2024-11-20 18:40:28.197024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:10.341 [2024-11-20 18:40:28.197032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:10.341 [2024-11-20 18:40:28.197040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.341 [2024-11-20 18:40:28.197173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:10.341 [2024-11-20 18:40:28.197184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:10.341 [2024-11-20 18:40:28.197194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:10.341 [2024-11-20 18:40:28.197202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.341 [2024-11-20 18:40:28.197235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:10.341 [2024-11-20 18:40:28.197244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:31:10.341 [2024-11-20 18:40:28.197255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:10.341 [2024-11-20 18:40:28.197263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.341 [2024-11-20 18:40:28.197304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:10.341 [2024-11-20 18:40:28.197313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:10.341 [2024-11-20 18:40:28.197321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:10.341 [2024-11-20 18:40:28.197330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.341 
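Two figures in the trace above are worth unpacking. First, the jq filter counts NV cache chunks with non-zero utilization; in the dump, chunks 1 and 2 are CLOSED at 1.0 and chunk 3 is OPEN at 0.001953125, hence used=3. Second, the shutdown statistics let the reported write amplification be recomputed directly:

    # Count non-empty cache chunks, exactly as upgrade_shutdown.sh@63 does:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'

    # WAF = total writes / user writes = 786752 / 524288
    printf 'scale=4; 786752 / 524288\n' | bc    # prints 1.5006

The extra 262464 blocks beyond the 524288 user writes are FTL's own metadata and relocation traffic, which is what the WAF of 1.5006 reflects.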
[2024-11-20 18:40:28.197377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:10.341 [2024-11-20 18:40:28.197391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:10.341 [2024-11-20 18:40:28.197399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:10.341 [2024-11-20 18:40:28.197407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.341 [2024-11-20 18:40:28.197535] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7691.034 ms, result 0 00:31:14.547 18:40:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:31:14.547 18:40:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:31:14.547 18:40:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:14.547 18:40:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:14.547 18:40:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:14.547 18:40:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83623 00:31:14.547 18:40:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:14.547 18:40:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83623 00:31:14.547 18:40:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:14.547 18:40:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83623 ']' 00:31:14.547 18:40:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:14.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:14.547 18:40:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:14.547 18:40:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:14.547 18:40:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:14.547 18:40:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:14.547 [2024-11-20 18:40:32.520333] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
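This restart is the second half of the test: a fresh spdk_tgt (pid 83623) is brought up from the saved tgt.json, so the bdev stack, including the FTL instance, is rebuilt from static config and FTL reloads its superblock from disk, as the traces below show. The launch pattern, reconstructed from ftl/common.sh@85-@91 above (backgrounding with & is an assumption, implied by waitforlisten polling the new pid):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!                 # 83623 in this run
    waitforlisten "$spdk_tgt_pid"   # returns once /var/tmp/spdk.sock accepts RPCs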
00:31:14.547 [2024-11-20 18:40:32.520454] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83623 ] 00:31:14.547 [2024-11-20 18:40:32.674600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:14.547 [2024-11-20 18:40:32.761512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:14.809 [2024-11-20 18:40:33.389444] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:14.809 [2024-11-20 18:40:33.389680] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:15.072 [2024-11-20 18:40:33.538506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.072 [2024-11-20 18:40:33.538541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:15.072 [2024-11-20 18:40:33.538553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:15.072 [2024-11-20 18:40:33.538559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.072 [2024-11-20 18:40:33.538604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.072 [2024-11-20 18:40:33.538612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:15.072 [2024-11-20 18:40:33.538619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:31:15.072 [2024-11-20 18:40:33.538625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.072 [2024-11-20 18:40:33.538644] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:15.072 [2024-11-20 18:40:33.539180] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:15.072 [2024-11-20 18:40:33.539197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.072 [2024-11-20 18:40:33.539204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:15.072 [2024-11-20 18:40:33.539211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.560 ms 00:31:15.072 [2024-11-20 18:40:33.539218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.072 [2024-11-20 18:40:33.540507] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:31:15.072 [2024-11-20 18:40:33.551447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.072 [2024-11-20 18:40:33.551475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:31:15.073 [2024-11-20 18:40:33.551489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.942 ms 00:31:15.073 [2024-11-20 18:40:33.551495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.073 [2024-11-20 18:40:33.551544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.073 [2024-11-20 18:40:33.551552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:31:15.073 [2024-11-20 18:40:33.551559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:31:15.073 [2024-11-20 18:40:33.551565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.073 [2024-11-20 18:40:33.557919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.073 [2024-11-20 
18:40:33.557949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:15.073 [2024-11-20 18:40:33.557957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.302 ms 00:31:15.073 [2024-11-20 18:40:33.557963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.073 [2024-11-20 18:40:33.558011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.073 [2024-11-20 18:40:33.558018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:15.073 [2024-11-20 18:40:33.558025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:31:15.073 [2024-11-20 18:40:33.558031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.073 [2024-11-20 18:40:33.558070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.073 [2024-11-20 18:40:33.558077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:15.073 [2024-11-20 18:40:33.558087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:15.073 [2024-11-20 18:40:33.558112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.073 [2024-11-20 18:40:33.558129] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:15.073 [2024-11-20 18:40:33.561196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.073 [2024-11-20 18:40:33.561220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:15.073 [2024-11-20 18:40:33.561227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.071 ms 00:31:15.073 [2024-11-20 18:40:33.561235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.073 [2024-11-20 18:40:33.561261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.073 [2024-11-20 18:40:33.561267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:15.073 [2024-11-20 18:40:33.561274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:15.073 [2024-11-20 18:40:33.561280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.073 [2024-11-20 18:40:33.561296] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:31:15.073 [2024-11-20 18:40:33.561312] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:31:15.073 [2024-11-20 18:40:33.561344] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:31:15.073 [2024-11-20 18:40:33.561357] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:31:15.073 [2024-11-20 18:40:33.561439] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:15.073 [2024-11-20 18:40:33.561447] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:15.073 [2024-11-20 18:40:33.561455] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:15.073 [2024-11-20 18:40:33.561463] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:15.073 [2024-11-20 18:40:33.561471] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:31:15.073 [2024-11-20 18:40:33.561480] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:15.073 [2024-11-20 18:40:33.561486] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:15.073 [2024-11-20 18:40:33.561491] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:15.073 [2024-11-20 18:40:33.561497] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:15.073 [2024-11-20 18:40:33.561503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.073 [2024-11-20 18:40:33.561509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:15.073 [2024-11-20 18:40:33.561514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.209 ms 00:31:15.073 [2024-11-20 18:40:33.561520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.073 [2024-11-20 18:40:33.561586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.073 [2024-11-20 18:40:33.561593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:15.073 [2024-11-20 18:40:33.561599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:31:15.073 [2024-11-20 18:40:33.561606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.073 [2024-11-20 18:40:33.561690] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:15.073 [2024-11-20 18:40:33.561698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:15.073 [2024-11-20 18:40:33.561705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:15.073 [2024-11-20 18:40:33.561713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:15.073 [2024-11-20 18:40:33.561719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:15.073 [2024-11-20 18:40:33.561725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:15.073 [2024-11-20 18:40:33.561731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:15.073 [2024-11-20 18:40:33.561737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:15.073 [2024-11-20 18:40:33.561744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:15.073 [2024-11-20 18:40:33.561751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:15.073 [2024-11-20 18:40:33.561756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:15.073 [2024-11-20 18:40:33.561762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:31:15.073 [2024-11-20 18:40:33.561768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:15.073 [2024-11-20 18:40:33.561774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:15.073 [2024-11-20 18:40:33.561780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:31:15.073 [2024-11-20 18:40:33.561785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:15.073 [2024-11-20 18:40:33.561790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:15.073 [2024-11-20 18:40:33.561796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:15.073 [2024-11-20 18:40:33.561801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:15.073 [2024-11-20 18:40:33.561807] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:15.073 [2024-11-20 18:40:33.561812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:15.073 [2024-11-20 18:40:33.561817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:15.073 [2024-11-20 18:40:33.561822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:15.073 [2024-11-20 18:40:33.561827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:15.073 [2024-11-20 18:40:33.561832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:15.073 [2024-11-20 18:40:33.561844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:15.073 [2024-11-20 18:40:33.561849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:15.073 [2024-11-20 18:40:33.561854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:15.073 [2024-11-20 18:40:33.561859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:15.073 [2024-11-20 18:40:33.561864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:15.073 [2024-11-20 18:40:33.561869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:15.073 [2024-11-20 18:40:33.561874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:15.073 [2024-11-20 18:40:33.561879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:15.073 [2024-11-20 18:40:33.561884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:15.073 [2024-11-20 18:40:33.561890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:15.073 [2024-11-20 18:40:33.561896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:15.073 [2024-11-20 18:40:33.561900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:15.073 [2024-11-20 18:40:33.561905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:15.073 [2024-11-20 18:40:33.561911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:15.073 [2024-11-20 18:40:33.561915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:15.073 [2024-11-20 18:40:33.561920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:15.073 [2024-11-20 18:40:33.561925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:15.073 [2024-11-20 18:40:33.561930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:15.073 [2024-11-20 18:40:33.561935] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:31:15.073 [2024-11-20 18:40:33.561947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:15.073 [2024-11-20 18:40:33.561960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:15.073 [2024-11-20 18:40:33.561965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:15.073 [2024-11-20 18:40:33.561973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:15.073 [2024-11-20 18:40:33.561979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:15.073 [2024-11-20 18:40:33.561989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:15.073 [2024-11-20 18:40:33.561995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:15.073 [2024-11-20 18:40:33.562000] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:15.073 [2024-11-20 18:40:33.562006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:15.073 [2024-11-20 18:40:33.562012] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:15.073 [2024-11-20 18:40:33.562025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:15.074 [2024-11-20 18:40:33.562031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:15.074 [2024-11-20 18:40:33.562037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:15.074 [2024-11-20 18:40:33.562046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:15.074 [2024-11-20 18:40:33.562052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:15.074 [2024-11-20 18:40:33.562057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:15.074 [2024-11-20 18:40:33.562066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:15.074 [2024-11-20 18:40:33.562072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:15.074 [2024-11-20 18:40:33.562077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:15.074 [2024-11-20 18:40:33.562087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:15.074 [2024-11-20 18:40:33.562338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:15.074 [2024-11-20 18:40:33.562385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:15.074 [2024-11-20 18:40:33.562418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:15.074 [2024-11-20 18:40:33.562449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:15.074 [2024-11-20 18:40:33.562472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:15.074 [2024-11-20 18:40:33.562494] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:31:15.074 [2024-11-20 18:40:33.562522] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:15.074 [2024-11-20 18:40:33.562545] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:15.074 [2024-11-20 18:40:33.562610] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:15.074 [2024-11-20 18:40:33.562636] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:15.074 [2024-11-20 18:40:33.562658] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:15.074 [2024-11-20 18:40:33.562681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.074 [2024-11-20 18:40:33.562701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:15.074 [2024-11-20 18:40:33.562717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.044 ms 00:31:15.074 [2024-11-20 18:40:33.562732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.074 [2024-11-20 18:40:33.562828] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:31:15.074 [2024-11-20 18:40:33.562860] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:31:19.284 [2024-11-20 18:40:37.344347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.284 [2024-11-20 18:40:37.344578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:31:19.284 [2024-11-20 18:40:37.344637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3781.506 ms 00:31:19.284 [2024-11-20 18:40:37.344657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.284 [2024-11-20 18:40:37.368070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.284 [2024-11-20 18:40:37.368228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:19.284 [2024-11-20 18:40:37.368281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.228 ms 00:31:19.284 [2024-11-20 18:40:37.368301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.284 [2024-11-20 18:40:37.368360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.284 [2024-11-20 18:40:37.368372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:19.284 [2024-11-20 18:40:37.368379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:31:19.284 [2024-11-20 18:40:37.368386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.284 [2024-11-20 18:40:37.394760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.284 [2024-11-20 18:40:37.394791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:19.284 [2024-11-20 18:40:37.394800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.339 ms 00:31:19.284 [2024-11-20 18:40:37.394808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.284 [2024-11-20 18:40:37.394834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.284 [2024-11-20 18:40:37.394841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:19.284 [2024-11-20 18:40:37.394848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:19.284 [2024-11-20 18:40:37.394854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.284 [2024-11-20 18:40:37.395285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.284 [2024-11-20 18:40:37.395299] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:19.284 [2024-11-20 18:40:37.395307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.394 ms 00:31:19.284 [2024-11-20 18:40:37.395313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.284 [2024-11-20 18:40:37.395355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.284 [2024-11-20 18:40:37.395362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:19.284 [2024-11-20 18:40:37.395369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:31:19.284 [2024-11-20 18:40:37.395375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.284 [2024-11-20 18:40:37.408417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.284 [2024-11-20 18:40:37.408443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:19.284 [2024-11-20 18:40:37.408451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.023 ms 00:31:19.284 [2024-11-20 18:40:37.408457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.284 [2024-11-20 18:40:37.419072] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:31:19.284 [2024-11-20 18:40:37.419111] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:31:19.284 [2024-11-20 18:40:37.419121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.284 [2024-11-20 18:40:37.419128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:31:19.284 [2024-11-20 18:40:37.419135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.584 ms 00:31:19.285 [2024-11-20 18:40:37.419140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.285 [2024-11-20 18:40:37.429759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.285 [2024-11-20 18:40:37.429884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:31:19.285 [2024-11-20 18:40:37.429898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.588 ms 00:31:19.285 [2024-11-20 18:40:37.429905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.285 [2024-11-20 18:40:37.438763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.285 [2024-11-20 18:40:37.438787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:31:19.285 [2024-11-20 18:40:37.438795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.828 ms 00:31:19.285 [2024-11-20 18:40:37.438801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.285 [2024-11-20 18:40:37.447641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.285 [2024-11-20 18:40:37.447665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:31:19.285 [2024-11-20 18:40:37.447673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.811 ms 00:31:19.285 [2024-11-20 18:40:37.447679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.285 [2024-11-20 18:40:37.448163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.285 [2024-11-20 18:40:37.448178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:19.285 [2024-11-20 
18:40:37.448186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.423 ms 00:31:19.285 [2024-11-20 18:40:37.448192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.285 [2024-11-20 18:40:37.505371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.285 [2024-11-20 18:40:37.507144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:31:19.285 [2024-11-20 18:40:37.507163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 57.163 ms 00:31:19.285 [2024-11-20 18:40:37.507172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.285 [2024-11-20 18:40:37.515426] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:19.285 [2024-11-20 18:40:37.516205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.285 [2024-11-20 18:40:37.516227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:19.285 [2024-11-20 18:40:37.516236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.859 ms 00:31:19.285 [2024-11-20 18:40:37.516242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.285 [2024-11-20 18:40:37.516308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.285 [2024-11-20 18:40:37.516319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:31:19.285 [2024-11-20 18:40:37.516326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:31:19.285 [2024-11-20 18:40:37.516333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.285 [2024-11-20 18:40:37.516370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.285 [2024-11-20 18:40:37.516378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:19.285 [2024-11-20 18:40:37.516386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:31:19.285 [2024-11-20 18:40:37.516392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.285 [2024-11-20 18:40:37.516410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.285 [2024-11-20 18:40:37.516416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:19.285 [2024-11-20 18:40:37.516422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:19.285 [2024-11-20 18:40:37.516431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.285 [2024-11-20 18:40:37.516459] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:31:19.285 [2024-11-20 18:40:37.516467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.285 [2024-11-20 18:40:37.516474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:31:19.285 [2024-11-20 18:40:37.516481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:31:19.285 [2024-11-20 18:40:37.516487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.285 [2024-11-20 18:40:37.534381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.285 [2024-11-20 18:40:37.534502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:31:19.285 [2024-11-20 18:40:37.534515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.880 ms 00:31:19.285 [2024-11-20 18:40:37.534522] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.285 [2024-11-20 18:40:37.534580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.285 [2024-11-20 18:40:37.534587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:19.285 [2024-11-20 18:40:37.534595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:31:19.285 [2024-11-20 18:40:37.534601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.285 [2024-11-20 18:40:37.535697] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3996.798 ms, result 0 00:31:19.285 [2024-11-20 18:40:37.550766] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:19.285 [2024-11-20 18:40:37.566767] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:19.285 [2024-11-20 18:40:37.574899] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:19.285 18:40:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:19.285 18:40:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:19.285 18:40:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:19.285 18:40:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:31:19.285 18:40:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:19.285 [2024-11-20 18:40:37.798887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.285 [2024-11-20 18:40:37.798918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:19.285 [2024-11-20 18:40:37.798927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:19.285 [2024-11-20 18:40:37.798936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.285 [2024-11-20 18:40:37.798953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.285 [2024-11-20 18:40:37.798960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:19.285 [2024-11-20 18:40:37.798967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:19.285 [2024-11-20 18:40:37.798973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.285 [2024-11-20 18:40:37.798989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:19.285 [2024-11-20 18:40:37.798995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:19.285 [2024-11-20 18:40:37.799002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:19.285 [2024-11-20 18:40:37.799008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:19.285 [2024-11-20 18:40:37.799052] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.155 ms, result 0 00:31:19.285 true 00:31:19.285 18:40:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:19.547 { 00:31:19.547 "name": "ftl", 00:31:19.547 "properties": [ 00:31:19.547 { 00:31:19.547 "name": "superblock_version", 00:31:19.547 "value": 5, 00:31:19.547 "read-only": true 00:31:19.547 }, 
00:31:19.547 { 00:31:19.547 "name": "base_device", 00:31:19.547 "bands": [ 00:31:19.547 { 00:31:19.547 "id": 0, 00:31:19.547 "state": "CLOSED", 00:31:19.547 "validity": 1.0 00:31:19.547 }, 00:31:19.547 { 00:31:19.547 "id": 1, 00:31:19.547 "state": "CLOSED", 00:31:19.547 "validity": 1.0 00:31:19.547 }, 00:31:19.547 { 00:31:19.547 "id": 2, 00:31:19.547 "state": "CLOSED", 00:31:19.547 "validity": 0.007843137254901933 00:31:19.547 }, 00:31:19.547 { 00:31:19.547 "id": 3, 00:31:19.547 "state": "FREE", 00:31:19.547 "validity": 0.0 00:31:19.547 }, 00:31:19.547 { 00:31:19.547 "id": 4, 00:31:19.547 "state": "FREE", 00:31:19.547 "validity": 0.0 00:31:19.547 }, 00:31:19.547 { 00:31:19.547 "id": 5, 00:31:19.547 "state": "FREE", 00:31:19.547 "validity": 0.0 00:31:19.547 }, 00:31:19.547 { 00:31:19.547 "id": 6, 00:31:19.547 "state": "FREE", 00:31:19.547 "validity": 0.0 00:31:19.547 }, 00:31:19.547 { 00:31:19.547 "id": 7, 00:31:19.547 "state": "FREE", 00:31:19.547 "validity": 0.0 00:31:19.547 }, 00:31:19.547 { 00:31:19.547 "id": 8, 00:31:19.547 "state": "FREE", 00:31:19.547 "validity": 0.0 00:31:19.547 }, 00:31:19.547 { 00:31:19.547 "id": 9, 00:31:19.547 "state": "FREE", 00:31:19.547 "validity": 0.0 00:31:19.547 }, 00:31:19.547 { 00:31:19.547 "id": 10, 00:31:19.547 "state": "FREE", 00:31:19.547 "validity": 0.0 00:31:19.547 }, 00:31:19.547 { 00:31:19.547 "id": 11, 00:31:19.547 "state": "FREE", 00:31:19.547 "validity": 0.0 00:31:19.547 }, 00:31:19.547 { 00:31:19.547 "id": 12, 00:31:19.547 "state": "FREE", 00:31:19.547 "validity": 0.0 00:31:19.547 }, 00:31:19.547 { 00:31:19.547 "id": 13, 00:31:19.547 "state": "FREE", 00:31:19.547 "validity": 0.0 00:31:19.547 }, 00:31:19.547 { 00:31:19.547 "id": 14, 00:31:19.547 "state": "FREE", 00:31:19.547 "validity": 0.0 00:31:19.547 }, 00:31:19.547 { 00:31:19.547 "id": 15, 00:31:19.547 "state": "FREE", 00:31:19.547 "validity": 0.0 00:31:19.547 }, 00:31:19.547 { 00:31:19.547 "id": 16, 00:31:19.547 "state": "FREE", 00:31:19.547 "validity": 0.0 00:31:19.547 }, 00:31:19.547 { 00:31:19.547 "id": 17, 00:31:19.547 "state": "FREE", 00:31:19.547 "validity": 0.0 00:31:19.547 } 00:31:19.547 ], 00:31:19.547 "read-only": true 00:31:19.547 }, 00:31:19.547 { 00:31:19.547 "name": "cache_device", 00:31:19.547 "type": "bdev", 00:31:19.547 "chunks": [ 00:31:19.547 { 00:31:19.547 "id": 0, 00:31:19.547 "state": "INACTIVE", 00:31:19.547 "utilization": 0.0 00:31:19.547 }, 00:31:19.547 { 00:31:19.547 "id": 1, 00:31:19.547 "state": "OPEN", 00:31:19.547 "utilization": 0.0 00:31:19.547 }, 00:31:19.547 { 00:31:19.547 "id": 2, 00:31:19.547 "state": "OPEN", 00:31:19.547 "utilization": 0.0 00:31:19.547 }, 00:31:19.547 { 00:31:19.547 "id": 3, 00:31:19.547 "state": "FREE", 00:31:19.547 "utilization": 0.0 00:31:19.547 }, 00:31:19.547 { 00:31:19.547 "id": 4, 00:31:19.547 "state": "FREE", 00:31:19.547 "utilization": 0.0 00:31:19.547 } 00:31:19.547 ], 00:31:19.547 "read-only": true 00:31:19.547 }, 00:31:19.547 { 00:31:19.547 "name": "verbose_mode", 00:31:19.547 "value": true, 00:31:19.547 "unit": "", 00:31:19.547 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:19.547 }, 00:31:19.547 { 00:31:19.547 "name": "prep_upgrade_on_shutdown", 00:31:19.547 "value": false, 00:31:19.547 "unit": "", 00:31:19.547 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:19.547 } 00:31:19.547 ] 00:31:19.547 } 00:31:19.547 18:40:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:31:19.547 18:40:38 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:19.547 18:40:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:31:19.810 18:40:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:31:19.810 18:40:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:31:19.810 18:40:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:31:19.810 18:40:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:31:19.810 18:40:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:19.810 Validate MD5 checksum, iteration 1 00:31:19.810 18:40:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:31:19.810 18:40:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:31:19.810 18:40:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:31:19.810 18:40:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:31:19.810 18:40:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:31:19.810 18:40:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:19.810 18:40:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:31:19.810 18:40:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:19.810 18:40:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:19.810 18:40:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:19.810 18:40:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:19.810 18:40:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:19.810 18:40:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:20.071 [2024-11-20 18:40:38.494370] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
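[editor's note] The xtrace lines above (upgrade_shutdown.sh@96-99) are the start of the checksum-validation pass. A minimal sketch of that loop as it can be reconstructed from the trace; tcp_dd is the harness wrapper that drives spdk_dd against the NVMe/TCP-attached ftln1 bdev, and iterations / expected[] are filled in here from the values seen in this particular run:

  # sketch of test_validate_checksum() reconstructed from the xtrace above
  iterations=2
  expected=(38c3e805980537d77425176bd97a7bb1 b75d60474968c27beae4bf5078bf7895)
  skip=0
  for (( i = 0; i < iterations; i++ )); do
      echo "Validate MD5 checksum, iteration $((i + 1))"
      # read a 1 GiB window: 1024 blocks of 1 MiB at queue depth 2
      tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
      skip=$((skip + 1024))
      sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
      [[ $sum != "${expected[i]}" ]] && exit 1
  done

Each iteration advances the read window by 1024 MiB, which is why the trace shows skip=0 for iteration 1 and skip=1024 for iteration 2.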
00:31:20.071 [2024-11-20 18:40:38.494497] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83704 ] 00:31:20.071 [2024-11-20 18:40:38.656576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:20.332 [2024-11-20 18:40:38.750400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:21.717  [2024-11-20T18:40:40.918Z] Copying: 626/1024 [MB] (626 MBps) [2024-11-20T18:40:41.856Z] Copying: 1024/1024 [MB] (average 630 MBps) 00:31:23.227 00:31:23.485 18:40:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:31:23.485 18:40:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:25.389 18:40:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:25.651 18:40:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=38c3e805980537d77425176bd97a7bb1 00:31:25.651 Validate MD5 checksum, iteration 2 00:31:25.651 18:40:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 38c3e805980537d77425176bd97a7bb1 != \3\8\c\3\e\8\0\5\9\8\0\5\3\7\d\7\7\4\2\5\1\7\6\b\d\9\7\a\7\b\b\1 ]] 00:31:25.651 18:40:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:25.651 18:40:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:25.651 18:40:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:31:25.651 18:40:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:25.651 18:40:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:25.651 18:40:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:25.651 18:40:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:25.651 18:40:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:25.651 18:40:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:25.651 [2024-11-20 18:40:44.091787] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
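[editor's note] The comparison traced above, [[ 38c3e805... != \3\8\c\3\e\8... ]], looks garbled but is normal xtrace output: the script quotes the right-hand side of !=, and set -x renders a quoted pattern by backslash-escaping every character. Quoting matters here because an unquoted right-hand side of == / != inside [[ ]] is glob-matched in bash. A plainer equivalent, with hypothetical variable names:

  actual=$(md5sum "$file" | cut -f1 -d' ')
  expected=38c3e805980537d77425176bd97a7bb1
  [[ $actual != "$expected" ]] && exit 1   # quoting the RHS forces a literal string compare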
00:31:25.651 [2024-11-20 18:40:44.091932] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83771 ] 00:31:25.651 [2024-11-20 18:40:44.252949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:25.912 [2024-11-20 18:40:44.357119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:27.300  [2024-11-20T18:40:46.503Z] Copying: 692/1024 [MB] (692 MBps) [2024-11-20T18:40:47.076Z] Copying: 1024/1024 [MB] (average 692 MBps) 00:31:28.447 00:31:28.447 18:40:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:31:28.447 18:40:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:30.994 18:40:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:30.994 18:40:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b75d60474968c27beae4bf5078bf7895 00:31:30.994 18:40:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b75d60474968c27beae4bf5078bf7895 != \b\7\5\d\6\0\4\7\4\9\6\8\c\2\7\b\e\a\e\4\b\f\5\0\7\8\b\f\7\8\9\5 ]] 00:31:30.994 18:40:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:30.994 18:40:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:30.994 18:40:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:31:30.994 18:40:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 83623 ]] 00:31:30.994 18:40:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 83623 00:31:30.994 18:40:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:31:30.994 18:40:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:31:30.994 18:40:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:30.994 18:40:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:30.994 18:40:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:30.994 18:40:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83828 00:31:30.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:30.994 18:40:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:30.994 18:40:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83828 00:31:30.994 18:40:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:30.994 18:40:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83828 ']' 00:31:30.994 18:40:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:30.994 18:40:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:30.994 18:40:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
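[editor's note] The trace above is the deliberately dirty restart at the heart of this test: the running target (pid 83623) is killed with SIGKILL so FTL gets no chance to persist a clean shutdown state, then a fresh spdk_tgt (pid 83828) is launched from the saved tgt.json and the test blocks until its RPC socket answers. A sketch under those assumptions, with the helper bodies reconstructed from the ftl/common.sh line numbers in the trace ($rootdir stands in for /home/vagrant/spdk_repo/spdk):

  # tcp_target_shutdown_dirty (ftl/common.sh@137-139)
  [[ -n $spdk_tgt_pid ]] && kill -9 "$spdk_tgt_pid"   # no clean FTL shutdown
  unset spdk_tgt_pid

  # tcp_target_setup (ftl/common.sh@81-91): restart from the saved config
  "$rootdir/build/bin/spdk_tgt" '--cpumask=[0]' \
      --config="$rootdir/test/ftl/config/tgt.json" &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"   # waits on /var/tmp/spdk.sock

On the next startup FTL has no clean shutdown record, so instead of scrubbing it has to rebuild its state from NV cache and shared memory, which is exactly what the "Recover band state", "Restore P2L checkpoints", "Recover open chunk" and "Restore L2P from shared memory" steps below show.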
00:31:30.994 18:40:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:30.994 18:40:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:30.994 [2024-11-20 18:40:49.161419] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:31:30.994 [2024-11-20 18:40:49.161674] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83828 ] 00:31:30.994 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 83623 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:31:30.994 [2024-11-20 18:40:49.308467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:30.994 [2024-11-20 18:40:49.397544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:31.567 [2024-11-20 18:40:50.025731] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:31.567 [2024-11-20 18:40:50.025788] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:31.567 [2024-11-20 18:40:50.174798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.567 [2024-11-20 18:40:50.174962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:31.567 [2024-11-20 18:40:50.174982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:31.567 [2024-11-20 18:40:50.174989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.567 [2024-11-20 18:40:50.175042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.567 [2024-11-20 18:40:50.175051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:31.567 [2024-11-20 18:40:50.175057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:31:31.567 [2024-11-20 18:40:50.175064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.567 [2024-11-20 18:40:50.175085] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:31.567 [2024-11-20 18:40:50.175667] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:31.567 [2024-11-20 18:40:50.175682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.567 [2024-11-20 18:40:50.175689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:31.567 [2024-11-20 18:40:50.175697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.604 ms 00:31:31.567 [2024-11-20 18:40:50.175703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.567 [2024-11-20 18:40:50.175957] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:31:31.567 [2024-11-20 18:40:50.189792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.567 [2024-11-20 18:40:50.189924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:31:31.567 [2024-11-20 18:40:50.189940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.835 ms 00:31:31.567 [2024-11-20 18:40:50.189947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.829 [2024-11-20 18:40:50.196929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:31:31.829 [2024-11-20 18:40:50.197016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:31:31.829 [2024-11-20 18:40:50.197067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:31:31.829 [2024-11-20 18:40:50.197086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.829 [2024-11-20 18:40:50.197372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.829 [2024-11-20 18:40:50.197645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:31.829 [2024-11-20 18:40:50.197683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.195 ms 00:31:31.829 [2024-11-20 18:40:50.197700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.829 [2024-11-20 18:40:50.197767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.829 [2024-11-20 18:40:50.197790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:31.829 [2024-11-20 18:40:50.197807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:31:31.829 [2024-11-20 18:40:50.197821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.829 [2024-11-20 18:40:50.197857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.829 [2024-11-20 18:40:50.197876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:31.829 [2024-11-20 18:40:50.197946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:31:31.829 [2024-11-20 18:40:50.197964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.829 [2024-11-20 18:40:50.197995] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:31.829 [2024-11-20 18:40:50.200456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.829 [2024-11-20 18:40:50.200552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:31.829 [2024-11-20 18:40:50.200600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.466 ms 00:31:31.829 [2024-11-20 18:40:50.200617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.829 [2024-11-20 18:40:50.200656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.829 [2024-11-20 18:40:50.200673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:31.829 [2024-11-20 18:40:50.200689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:31.829 [2024-11-20 18:40:50.200704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.829 [2024-11-20 18:40:50.200730] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:31:31.829 [2024-11-20 18:40:50.200755] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:31:31.829 [2024-11-20 18:40:50.200862] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:31:31.829 [2024-11-20 18:40:50.200931] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:31:31.829 [2024-11-20 18:40:50.201085] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:31.829 [2024-11-20 18:40:50.201131] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:31.830 [2024-11-20 18:40:50.201312] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:31.830 [2024-11-20 18:40:50.201338] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:31.830 [2024-11-20 18:40:50.201363] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:31:31.830 [2024-11-20 18:40:50.201387] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:31.830 [2024-11-20 18:40:50.201402] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:31.830 [2024-11-20 18:40:50.201418] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:31.830 [2024-11-20 18:40:50.201483] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:31.830 [2024-11-20 18:40:50.201503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.830 [2024-11-20 18:40:50.201523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:31.830 [2024-11-20 18:40:50.201539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.776 ms 00:31:31.830 [2024-11-20 18:40:50.201554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.830 [2024-11-20 18:40:50.201631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.830 [2024-11-20 18:40:50.201669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:31.830 [2024-11-20 18:40:50.201687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:31:31.830 [2024-11-20 18:40:50.201702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.830 [2024-11-20 18:40:50.201806] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:31.830 [2024-11-20 18:40:50.201827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:31.830 [2024-11-20 18:40:50.201883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:31.830 [2024-11-20 18:40:50.201902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:31.830 [2024-11-20 18:40:50.201918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:31.830 [2024-11-20 18:40:50.201932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:31.830 [2024-11-20 18:40:50.201946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:31.830 [2024-11-20 18:40:50.201980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:31.830 [2024-11-20 18:40:50.201996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:31.830 [2024-11-20 18:40:50.202010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:31.830 [2024-11-20 18:40:50.202087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:31.830 [2024-11-20 18:40:50.202114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:31:31.830 [2024-11-20 18:40:50.202129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:31.830 [2024-11-20 18:40:50.202165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:31.830 [2024-11-20 18:40:50.202182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:31:31.830 [2024-11-20 18:40:50.202198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:31.830 [2024-11-20 18:40:50.202212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:31.830 [2024-11-20 18:40:50.202228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:31.830 [2024-11-20 18:40:50.202241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:31.830 [2024-11-20 18:40:50.202277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:31.830 [2024-11-20 18:40:50.202293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:31.830 [2024-11-20 18:40:50.202308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:31.830 [2024-11-20 18:40:50.202322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:31.830 [2024-11-20 18:40:50.202353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:31.830 [2024-11-20 18:40:50.202368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:31.830 [2024-11-20 18:40:50.202382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:31.830 [2024-11-20 18:40:50.202424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:31.830 [2024-11-20 18:40:50.202442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:31.830 [2024-11-20 18:40:50.202457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:31.830 [2024-11-20 18:40:50.202473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:31.830 [2024-11-20 18:40:50.202488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:31.830 [2024-11-20 18:40:50.202502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:31.830 [2024-11-20 18:40:50.202543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:31.830 [2024-11-20 18:40:50.202560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:31.830 [2024-11-20 18:40:50.202574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:31.830 [2024-11-20 18:40:50.202588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:31.830 [2024-11-20 18:40:50.202603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:31.830 [2024-11-20 18:40:50.202616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:31.830 [2024-11-20 18:40:50.202649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:31.830 [2024-11-20 18:40:50.202666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:31.830 [2024-11-20 18:40:50.202680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:31.830 [2024-11-20 18:40:50.202757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:31.830 [2024-11-20 18:40:50.202774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:31.830 [2024-11-20 18:40:50.202789] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:31:31.830 [2024-11-20 18:40:50.202804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:31.830 [2024-11-20 18:40:50.202837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:31.830 [2024-11-20 18:40:50.202854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:31:31.830 [2024-11-20 18:40:50.202869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:31.830 [2024-11-20 18:40:50.202884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:31.830 [2024-11-20 18:40:50.202916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:31.830 [2024-11-20 18:40:50.202933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:31.830 [2024-11-20 18:40:50.202948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:31.830 [2024-11-20 18:40:50.202962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:31.830 [2024-11-20 18:40:50.202978] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:31.830 [2024-11-20 18:40:50.203028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:31.830 [2024-11-20 18:40:50.203052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:31.830 [2024-11-20 18:40:50.203074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:31.830 [2024-11-20 18:40:50.203117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:31.830 [2024-11-20 18:40:50.203163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:31.830 [2024-11-20 18:40:50.203211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:31.830 [2024-11-20 18:40:50.203243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:31.830 [2024-11-20 18:40:50.203342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:31.830 [2024-11-20 18:40:50.203369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:31.830 [2024-11-20 18:40:50.203391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:31.830 [2024-11-20 18:40:50.203414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:31.830 [2024-11-20 18:40:50.203465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:31.830 [2024-11-20 18:40:50.203490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:31.830 [2024-11-20 18:40:50.203669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:31.830 [2024-11-20 18:40:50.203741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:31.830 [2024-11-20 18:40:50.203765] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:31:31.830 [2024-11-20 18:40:50.203788] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:31.830 [2024-11-20 18:40:50.203844] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:31.830 [2024-11-20 18:40:50.204070] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:31.830 [2024-11-20 18:40:50.204104] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:31.830 [2024-11-20 18:40:50.204148] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:31.830 [2024-11-20 18:40:50.204172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.830 [2024-11-20 18:40:50.204191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:31.830 [2024-11-20 18:40:50.204206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.419 ms 00:31:31.830 [2024-11-20 18:40:50.204222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.830 [2024-11-20 18:40:50.225597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.830 [2024-11-20 18:40:50.225693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:31.830 [2024-11-20 18:40:50.225733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.320 ms 00:31:31.830 [2024-11-20 18:40:50.225743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.830 [2024-11-20 18:40:50.225774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.831 [2024-11-20 18:40:50.225781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:31.831 [2024-11-20 18:40:50.225788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:31:31.831 [2024-11-20 18:40:50.225794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.831 [2024-11-20 18:40:50.252365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.831 [2024-11-20 18:40:50.252466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:31.831 [2024-11-20 18:40:50.252478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.525 ms 00:31:31.831 [2024-11-20 18:40:50.252485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.831 [2024-11-20 18:40:50.252511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.831 [2024-11-20 18:40:50.252518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:31.831 [2024-11-20 18:40:50.252525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:31.831 [2024-11-20 18:40:50.252531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.831 [2024-11-20 18:40:50.252614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.831 [2024-11-20 18:40:50.252623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:31.831 [2024-11-20 18:40:50.252630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:31:31.831 [2024-11-20 18:40:50.252636] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:31:31.831 [2024-11-20 18:40:50.252670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.831 [2024-11-20 18:40:50.252678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:31.831 [2024-11-20 18:40:50.252684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:31:31.831 [2024-11-20 18:40:50.252690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.831 [2024-11-20 18:40:50.266010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.831 [2024-11-20 18:40:50.266037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:31.831 [2024-11-20 18:40:50.266045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.303 ms 00:31:31.831 [2024-11-20 18:40:50.266051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.831 [2024-11-20 18:40:50.266145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.831 [2024-11-20 18:40:50.266155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:31:31.831 [2024-11-20 18:40:50.266162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:31.831 [2024-11-20 18:40:50.266168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.831 [2024-11-20 18:40:50.292372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.831 [2024-11-20 18:40:50.292426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:31:31.831 [2024-11-20 18:40:50.292440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.186 ms 00:31:31.831 [2024-11-20 18:40:50.292449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.831 [2024-11-20 18:40:50.301718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.831 [2024-11-20 18:40:50.301745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:31.831 [2024-11-20 18:40:50.301759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.403 ms 00:31:31.831 [2024-11-20 18:40:50.301765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.831 [2024-11-20 18:40:50.349391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.831 [2024-11-20 18:40:50.349425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:31:31.831 [2024-11-20 18:40:50.349440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.582 ms 00:31:31.831 [2024-11-20 18:40:50.349448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.831 [2024-11-20 18:40:50.349574] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:31:31.831 [2024-11-20 18:40:50.349674] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:31:31.831 [2024-11-20 18:40:50.349775] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:31:31.831 [2024-11-20 18:40:50.349868] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:31:31.831 [2024-11-20 18:40:50.349876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.831 [2024-11-20 18:40:50.349883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:31:31.831 [2024-11-20 
18:40:50.349891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.393 ms 00:31:31.831 [2024-11-20 18:40:50.349897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.831 [2024-11-20 18:40:50.349940] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:31:31.831 [2024-11-20 18:40:50.349949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.831 [2024-11-20 18:40:50.349959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:31:31.831 [2024-11-20 18:40:50.349967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:31:31.831 [2024-11-20 18:40:50.349973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.831 [2024-11-20 18:40:50.362557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.831 [2024-11-20 18:40:50.362587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:31:31.831 [2024-11-20 18:40:50.362595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.568 ms 00:31:31.831 [2024-11-20 18:40:50.362603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.831 [2024-11-20 18:40:50.369029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.831 [2024-11-20 18:40:50.369054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:31:31.831 [2024-11-20 18:40:50.369063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:31:31.831 [2024-11-20 18:40:50.369069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:31.831 [2024-11-20 18:40:50.369165] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:31:31.831 [2024-11-20 18:40:50.369321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:31.831 [2024-11-20 18:40:50.369333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:31:31.831 [2024-11-20 18:40:50.369342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.157 ms 00:31:31.831 [2024-11-20 18:40:50.369348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:32.404 [2024-11-20 18:40:50.899888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:32.404 [2024-11-20 18:40:50.899920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:31:32.404 [2024-11-20 18:40:50.899930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 529.872 ms 00:31:32.404 [2024-11-20 18:40:50.899937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:32.404 [2024-11-20 18:40:50.903446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:32.404 [2024-11-20 18:40:50.903473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:31:32.404 [2024-11-20 18:40:50.903481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.353 ms 00:31:32.404 [2024-11-20 18:40:50.903488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:32.404 [2024-11-20 18:40:50.904165] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:31:32.404 [2024-11-20 18:40:50.904188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:32.404 [2024-11-20 18:40:50.904195] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:31:32.404 [2024-11-20 18:40:50.904203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.681 ms 00:31:32.404 [2024-11-20 18:40:50.904208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:32.404 [2024-11-20 18:40:50.904272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:32.404 [2024-11-20 18:40:50.904280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:31:32.404 [2024-11-20 18:40:50.904287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:32.404 [2024-11-20 18:40:50.904293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:32.404 [2024-11-20 18:40:50.904324] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 535.158 ms, result 0 00:31:32.404 [2024-11-20 18:40:50.904352] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:31:32.404 [2024-11-20 18:40:50.904500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:32.404 [2024-11-20 18:40:50.904666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:31:32.404 [2024-11-20 18:40:50.904679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.148 ms 00:31:32.404 [2024-11-20 18:40:50.904685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.035 [2024-11-20 18:40:51.474126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.035 [2024-11-20 18:40:51.474156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:31:33.035 [2024-11-20 18:40:51.474164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 568.757 ms 00:31:33.035 [2024-11-20 18:40:51.474170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.035 [2024-11-20 18:40:51.477692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.035 [2024-11-20 18:40:51.477717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:31:33.035 [2024-11-20 18:40:51.477724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.354 ms 00:31:33.035 [2024-11-20 18:40:51.477731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.035 [2024-11-20 18:40:51.478588] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:31:33.035 [2024-11-20 18:40:51.478605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.035 [2024-11-20 18:40:51.478611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:31:33.035 [2024-11-20 18:40:51.478617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.854 ms 00:31:33.035 [2024-11-20 18:40:51.478623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.035 [2024-11-20 18:40:51.478647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.035 [2024-11-20 18:40:51.478653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:31:33.035 [2024-11-20 18:40:51.478659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:33.035 [2024-11-20 18:40:51.478665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.035 [2024-11-20 
18:40:51.478691] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 574.334 ms, result 0 00:31:33.035 [2024-11-20 18:40:51.478721] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:33.035 [2024-11-20 18:40:51.478729] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:31:33.035 [2024-11-20 18:40:51.478737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.035 [2024-11-20 18:40:51.478743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:31:33.035 [2024-11-20 18:40:51.478750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1109.585 ms 00:31:33.035 [2024-11-20 18:40:51.478756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.035 [2024-11-20 18:40:51.478778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.035 [2024-11-20 18:40:51.478785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:31:33.035 [2024-11-20 18:40:51.478794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:33.035 [2024-11-20 18:40:51.478799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.035 [2024-11-20 18:40:51.486604] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:33.035 [2024-11-20 18:40:51.486766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.035 [2024-11-20 18:40:51.486796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:33.035 [2024-11-20 18:40:51.486849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.954 ms 00:31:33.035 [2024-11-20 18:40:51.486867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.035 [2024-11-20 18:40:51.487419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.035 [2024-11-20 18:40:51.487493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:31:33.035 [2024-11-20 18:40:51.487547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.488 ms 00:31:33.035 [2024-11-20 18:40:51.487566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.035 [2024-11-20 18:40:51.489255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.035 [2024-11-20 18:40:51.489327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:31:33.035 [2024-11-20 18:40:51.489375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.664 ms 00:31:33.035 [2024-11-20 18:40:51.489394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.035 [2024-11-20 18:40:51.489434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.035 [2024-11-20 18:40:51.489453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:31:33.035 [2024-11-20 18:40:51.489469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:33.035 [2024-11-20 18:40:51.489488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.035 [2024-11-20 18:40:51.489579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.035 [2024-11-20 18:40:51.489719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:33.035 
[2024-11-20 18:40:51.489739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:31:33.035 [2024-11-20 18:40:51.489755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.035 [2024-11-20 18:40:51.489783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.035 [2024-11-20 18:40:51.489800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:33.035 [2024-11-20 18:40:51.489816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:33.035 [2024-11-20 18:40:51.489831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.035 [2024-11-20 18:40:51.489867] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:31:33.035 [2024-11-20 18:40:51.489933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.035 [2024-11-20 18:40:51.489951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:31:33.035 [2024-11-20 18:40:51.489967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.066 ms 00:31:33.035 [2024-11-20 18:40:51.489983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.035 [2024-11-20 18:40:51.490035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:33.035 [2024-11-20 18:40:51.490057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:33.035 [2024-11-20 18:40:51.490159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:31:33.035 [2024-11-20 18:40:51.490177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:33.035 [2024-11-20 18:40:51.491112] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1315.927 ms, result 0 00:31:33.035 [2024-11-20 18:40:51.503476] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:33.035 [2024-11-20 18:40:51.519476] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:33.035 [2024-11-20 18:40:51.527611] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:33.308 18:40:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:33.308 18:40:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:33.308 Validate MD5 checksum, iteration 1 00:31:33.308 18:40:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:33.308 18:40:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:31:33.308 18:40:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:31:33.308 18:40:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:31:33.308 18:40:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:31:33.308 18:40:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:33.308 18:40:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:31:33.308 18:40:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:33.308 18:40:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:33.308 18:40:51 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:33.308 18:40:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:33.308 18:40:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:33.308 18:40:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:33.308 [2024-11-20 18:40:51.723651] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:31:33.308 [2024-11-20 18:40:51.723895] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83857 ] 00:31:33.308 [2024-11-20 18:40:51.880636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:33.579 [2024-11-20 18:40:51.973856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:34.966  [2024-11-20T18:40:54.167Z] Copying: 664/1024 [MB] (664 MBps) [2024-11-20T18:40:58.377Z] Copying: 1024/1024 [MB] (average 679 MBps) 00:31:39.748 00:31:39.748 18:40:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:31:39.748 18:40:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:41.130 18:40:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:41.130 Validate MD5 checksum, iteration 2 00:31:41.130 18:40:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=38c3e805980537d77425176bd97a7bb1 00:31:41.130 18:40:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 38c3e805980537d77425176bd97a7bb1 != \3\8\c\3\e\8\0\5\9\8\0\5\3\7\d\7\7\4\2\5\1\7\6\b\d\9\7\a\7\b\b\1 ]] 00:31:41.130 18:40:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:41.130 18:40:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:41.130 18:40:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:31:41.130 18:40:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:41.130 18:40:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:41.130 18:40:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:41.130 18:40:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:41.130 18:40:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:41.130 18:40:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:41.130 [2024-11-20 18:40:59.631113] Starting SPDK v25.01-pre git sha1 
557f022f6 / DPDK 24.03.0 initialization... 00:31:41.130 [2024-11-20 18:40:59.631312] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83942 ] 00:31:41.392 [2024-11-20 18:40:59.784712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:41.392 [2024-11-20 18:40:59.880039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:42.778  [2024-11-20T18:41:02.349Z] Copying: 565/1024 [MB] (565 MBps) [2024-11-20T18:41:04.262Z] Copying: 1024/1024 [MB] (average 575 MBps) 00:31:45.633 00:31:45.633 18:41:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:31:45.633 18:41:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:48.180 18:41:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:48.180 18:41:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b75d60474968c27beae4bf5078bf7895 00:31:48.180 18:41:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b75d60474968c27beae4bf5078bf7895 != \b\7\5\d\6\0\4\7\4\9\6\8\c\2\7\b\e\a\e\4\b\f\5\0\7\8\b\f\7\8\9\5 ]] 00:31:48.180 18:41:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:48.180 18:41:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:48.180 18:41:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:31:48.180 18:41:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:31:48.180 18:41:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:31:48.180 18:41:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:48.180 18:41:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:31:48.181 18:41:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:31:48.181 18:41:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:31:48.181 18:41:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:31:48.181 18:41:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83828 ]] 00:31:48.181 18:41:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83828 00:31:48.181 18:41:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83828 ']' 00:31:48.181 18:41:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83828 00:31:48.181 18:41:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:31:48.181 18:41:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:48.181 18:41:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83828 00:31:48.181 killing process with pid 83828 00:31:48.181 18:41:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:48.181 18:41:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:48.181 18:41:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83828' 00:31:48.181 18:41:06 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 83828 00:31:48.181 18:41:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83828 00:31:48.442 [2024-11-20 18:41:07.062223] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:31:48.704 [2024-11-20 18:41:07.074439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.704 [2024-11-20 18:41:07.074475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:31:48.704 [2024-11-20 18:41:07.074486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:48.704 [2024-11-20 18:41:07.074493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.704 [2024-11-20 18:41:07.074512] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:31:48.704 [2024-11-20 18:41:07.076646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.704 [2024-11-20 18:41:07.076672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:31:48.704 [2024-11-20 18:41:07.076682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.123 ms 00:31:48.704 [2024-11-20 18:41:07.076693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.704 [2024-11-20 18:41:07.076883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.704 [2024-11-20 18:41:07.076894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:31:48.704 [2024-11-20 18:41:07.076901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.171 ms 00:31:48.704 [2024-11-20 18:41:07.076907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.704 [2024-11-20 18:41:07.078215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.704 [2024-11-20 18:41:07.078238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:31:48.704 [2024-11-20 18:41:07.078263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.295 ms 00:31:48.704 [2024-11-20 18:41:07.078270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.704 [2024-11-20 18:41:07.079146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.704 [2024-11-20 18:41:07.079166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:31:48.704 [2024-11-20 18:41:07.079174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.847 ms 00:31:48.704 [2024-11-20 18:41:07.079181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.704 [2024-11-20 18:41:07.087050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.704 [2024-11-20 18:41:07.087078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:31:48.704 [2024-11-20 18:41:07.087087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.842 ms 00:31:48.704 [2024-11-20 18:41:07.087109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.704 [2024-11-20 18:41:07.091351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.704 [2024-11-20 18:41:07.091376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:31:48.704 [2024-11-20 18:41:07.091385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.211 ms 00:31:48.704 [2024-11-20 18:41:07.091393] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:31:48.704 [2024-11-20 18:41:07.091466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.704 [2024-11-20 18:41:07.091474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:31:48.704 [2024-11-20 18:41:07.091482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:31:48.704 [2024-11-20 18:41:07.091488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.704 [2024-11-20 18:41:07.099515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.704 [2024-11-20 18:41:07.099540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:31:48.704 [2024-11-20 18:41:07.099548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.010 ms 00:31:48.704 [2024-11-20 18:41:07.099554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.704 [2024-11-20 18:41:07.107427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.704 [2024-11-20 18:41:07.107451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:31:48.704 [2024-11-20 18:41:07.107458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.846 ms 00:31:48.704 [2024-11-20 18:41:07.107464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.704 [2024-11-20 18:41:07.115185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.704 [2024-11-20 18:41:07.115209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:31:48.704 [2024-11-20 18:41:07.115216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.693 ms 00:31:48.704 [2024-11-20 18:41:07.115222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.704 [2024-11-20 18:41:07.122819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.704 [2024-11-20 18:41:07.122844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:31:48.704 [2024-11-20 18:41:07.122851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.550 ms 00:31:48.704 [2024-11-20 18:41:07.122857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.704 [2024-11-20 18:41:07.122882] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:31:48.704 [2024-11-20 18:41:07.122893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:48.704 [2024-11-20 18:41:07.122901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:31:48.704 [2024-11-20 18:41:07.122907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:31:48.704 [2024-11-20 18:41:07.122915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:48.704 [2024-11-20 18:41:07.122921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:48.704 [2024-11-20 18:41:07.122927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:48.704 [2024-11-20 18:41:07.122933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:48.705 [2024-11-20 18:41:07.122939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:48.705 
[2024-11-20 18:41:07.122944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:48.705 [2024-11-20 18:41:07.122950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:48.705 [2024-11-20 18:41:07.122956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:48.705 [2024-11-20 18:41:07.122961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:48.705 [2024-11-20 18:41:07.122967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:48.705 [2024-11-20 18:41:07.122973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:48.705 [2024-11-20 18:41:07.122979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:48.705 [2024-11-20 18:41:07.122984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:48.705 [2024-11-20 18:41:07.122990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:48.705 [2024-11-20 18:41:07.122996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:48.705 [2024-11-20 18:41:07.123004] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:31:48.705 [2024-11-20 18:41:07.123010] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: fc067ecb-7c25-46bc-a976-5179fb694498 00:31:48.705 [2024-11-20 18:41:07.123016] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:31:48.705 [2024-11-20 18:41:07.123022] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:31:48.705 [2024-11-20 18:41:07.123028] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:31:48.705 [2024-11-20 18:41:07.123034] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:31:48.705 [2024-11-20 18:41:07.123040] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:31:48.705 [2024-11-20 18:41:07.123045] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:31:48.705 [2024-11-20 18:41:07.123051] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:31:48.705 [2024-11-20 18:41:07.123057] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:31:48.705 [2024-11-20 18:41:07.123063] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:31:48.705 [2024-11-20 18:41:07.123071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.705 [2024-11-20 18:41:07.123081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:31:48.705 [2024-11-20 18:41:07.123088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.190 ms 00:31:48.705 [2024-11-20 18:41:07.123109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.705 [2024-11-20 18:41:07.133401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.705 [2024-11-20 18:41:07.133530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:31:48.705 [2024-11-20 18:41:07.133544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.260 ms 00:31:48.705 [2024-11-20 18:41:07.133551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
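
Each FTL management step in the shutdown trace above is emitted by trace_step as a fixed quartet: an "Action" marker, the step "name", its "duration" in milliseconds, and a "status" code, with finish_msg closing out the whole process ('FTL shutdown', total duration, result). A quick way to rank steps by cost from a saved copy of this console output — a sketch only, assuming the log was captured to build.log and uses exactly the trace_step format shown here:

  # pair each "name:" line with the "duration:" line that follows it, then sort by time
  awk '/trace_step:.*name:/     { sub(/.*name: /, "");     step = $0 }
       /trace_step:.*duration:/ { sub(/.*duration: /, ""); print $1, "ms\t" step }' \
      build.log | sort -rn | head

On this run that would surface the ~8 ms metadata persists (NV cache, band info, trim, superblock) and the 10.260 ms L2P deinit at the top.
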
00:31:48.705 [2024-11-20 18:41:07.133845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.705 [2024-11-20 18:41:07.133853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:31:48.705 [2024-11-20 18:41:07.133860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.278 ms 00:31:48.705 [2024-11-20 18:41:07.133866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.705 [2024-11-20 18:41:07.168885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:48.705 [2024-11-20 18:41:07.169010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:48.705 [2024-11-20 18:41:07.169022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:48.705 [2024-11-20 18:41:07.169030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.705 [2024-11-20 18:41:07.169059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:48.705 [2024-11-20 18:41:07.169066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:48.705 [2024-11-20 18:41:07.169073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:48.705 [2024-11-20 18:41:07.169079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.705 [2024-11-20 18:41:07.169160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:48.705 [2024-11-20 18:41:07.169169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:48.705 [2024-11-20 18:41:07.169176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:48.705 [2024-11-20 18:41:07.169183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.705 [2024-11-20 18:41:07.169197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:48.705 [2024-11-20 18:41:07.169207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:48.705 [2024-11-20 18:41:07.169214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:48.705 [2024-11-20 18:41:07.169220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.705 [2024-11-20 18:41:07.231786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:48.705 [2024-11-20 18:41:07.231926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:48.705 [2024-11-20 18:41:07.231941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:48.705 [2024-11-20 18:41:07.231948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.705 [2024-11-20 18:41:07.283322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:48.705 [2024-11-20 18:41:07.283358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:48.705 [2024-11-20 18:41:07.283367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:48.705 [2024-11-20 18:41:07.283374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.705 [2024-11-20 18:41:07.283435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:48.705 [2024-11-20 18:41:07.283444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:48.705 [2024-11-20 18:41:07.283451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:48.705 [2024-11-20 18:41:07.283457] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.705 [2024-11-20 18:41:07.283507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:48.705 [2024-11-20 18:41:07.283516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:48.705 [2024-11-20 18:41:07.283527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:48.705 [2024-11-20 18:41:07.283540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.705 [2024-11-20 18:41:07.283619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:48.705 [2024-11-20 18:41:07.283627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:48.705 [2024-11-20 18:41:07.283634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:48.705 [2024-11-20 18:41:07.283640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.705 [2024-11-20 18:41:07.283668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:48.705 [2024-11-20 18:41:07.283675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:31:48.705 [2024-11-20 18:41:07.283683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:48.705 [2024-11-20 18:41:07.283691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.705 [2024-11-20 18:41:07.283726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:48.705 [2024-11-20 18:41:07.283732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:48.705 [2024-11-20 18:41:07.283739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:48.705 [2024-11-20 18:41:07.283745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.705 [2024-11-20 18:41:07.283783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:48.705 [2024-11-20 18:41:07.283792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:48.705 [2024-11-20 18:41:07.283800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:48.705 [2024-11-20 18:41:07.283806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.705 [2024-11-20 18:41:07.283912] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 209.443 ms, result 0 00:31:49.646 18:41:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:31:49.646 18:41:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:49.646 18:41:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:31:49.646 18:41:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:31:49.646 18:41:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:31:49.646 18:41:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:49.646 Remove shared memory files 00:31:49.646 18:41:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:31:49.646 18:41:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:49.646 18:41:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:31:49.646 18:41:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:31:49.646 18:41:07 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid83623 00:31:49.646 18:41:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:49.646 18:41:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:31:49.646 ************************************ 00:31:49.646 END TEST ftl_upgrade_shutdown 00:31:49.646 ************************************ 00:31:49.646 00:31:49.646 real 1m22.637s 00:31:49.646 user 1m53.980s 00:31:49.646 sys 0m18.804s 00:31:49.646 18:41:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:49.646 18:41:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:49.646 18:41:08 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:31:49.646 18:41:08 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:31:49.646 18:41:08 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:31:49.646 18:41:08 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:49.646 18:41:08 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:49.646 ************************************ 00:31:49.646 START TEST ftl_restore_fast 00:31:49.646 ************************************ 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:31:49.646 * Looking for test storage... 00:31:49.646 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lcov --version 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:49.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.646 --rc genhtml_branch_coverage=1 00:31:49.646 --rc genhtml_function_coverage=1 00:31:49.646 --rc genhtml_legend=1 00:31:49.646 --rc geninfo_all_blocks=1 00:31:49.646 --rc geninfo_unexecuted_blocks=1 00:31:49.646 00:31:49.646 ' 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:49.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.646 --rc genhtml_branch_coverage=1 00:31:49.646 --rc genhtml_function_coverage=1 00:31:49.646 --rc genhtml_legend=1 00:31:49.646 --rc geninfo_all_blocks=1 00:31:49.646 --rc geninfo_unexecuted_blocks=1 00:31:49.646 00:31:49.646 ' 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:49.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.646 --rc genhtml_branch_coverage=1 00:31:49.646 --rc genhtml_function_coverage=1 00:31:49.646 --rc genhtml_legend=1 00:31:49.646 --rc geninfo_all_blocks=1 00:31:49.646 --rc geninfo_unexecuted_blocks=1 00:31:49.646 00:31:49.646 ' 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:49.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.646 --rc genhtml_branch_coverage=1 00:31:49.646 --rc genhtml_function_coverage=1 00:31:49.646 --rc genhtml_legend=1 00:31:49.646 --rc geninfo_all_blocks=1 00:31:49.646 --rc geninfo_unexecuted_blocks=1 00:31:49.646 00:31:49.646 ' 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
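
The restore_fast test that starts here builds its FTL bdev through a fixed JSON-RPC sequence; condensed from the rpc.py invocations logged below, it amounts to the following (a sketch, not the script itself — $LVS_UUID and $LVOL_UUID stand for the UUIDs this particular run prints, f60eab26-a097-4227-8a6c-629b7b0aedf7 and e46f6e59-729f-414e-8125-51b1aad87f31):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base device (1310720 x 4096 B = 5120 MiB)
  $rpc bdev_lvol_create_lvstore nvme0n1 lvs                           # lvstore on the base namespace
  $rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$LVS_UUID"            # 103424 MiB thin-provisioned lvol
  $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # NV cache device
  $rpc bdev_split_create nvc0n1 -s 5171 1                             # 5171 MiB split used as write buffer
  $rpc -t 240 bdev_ftl_create -b ftl0 -d "$LVOL_UUID" \
       --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown                # FTL on lvol + cache, fast shutdown enabled

The bdev sizes along the way come from bdev_get_bdevs piped through jq ('.[] .block_size' and '.[] .num_blocks'), multiplied out to MiB, which is where the 5120 and 103424 figures in the trace below originate.
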
00:31:49.646 18:41:08 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:49.646 18:41:08 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:31:49.647 18:41:08 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:31:49.647 18:41:08 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:49.647 18:41:08 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:49.647 18:41:08 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:31:49.647 18:41:08 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:31:49.647 18:41:08 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:49.647 18:41:08 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:49.647 18:41:08 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:31:49.647 18:41:08 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:31:49.647 18:41:08 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:49.647 18:41:08 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:49.647 18:41:08 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:31:49.647 18:41:08 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:31:49.647 18:41:08 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:49.647 18:41:08 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:49.647 18:41:08 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:49.647 18:41:08 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:49.647 18:41:08 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:31:49.647 18:41:08 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:31:49.647 18:41:08 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:49.647 18:41:08 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:49.647 18:41:08 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:49.647 18:41:08 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:31:49.647 18:41:08 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.NZL2NEpZ6Q 00:31:49.647 18:41:08 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:49.647 18:41:08 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:31:49.647 18:41:08 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:31:49.647 18:41:08 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:49.647 18:41:08 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:31:49.647 18:41:08 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:31:49.647 18:41:08 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:49.647 18:41:08 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:31:49.647 18:41:08 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:31:49.647 18:41:08 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:31:49.647 18:41:08 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:31:49.647 18:41:08 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=84107 00:31:49.647 18:41:08 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 84107 00:31:49.647 18:41:08 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 84107 ']' 00:31:49.647 18:41:08 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:49.647 18:41:08 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:49.647 18:41:08 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:49.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:49.647 18:41:08 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:49.647 18:41:08 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:49.647 18:41:08 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:31:49.907 [2024-11-20 18:41:08.284517] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:31:49.907 [2024-11-20 18:41:08.284716] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84107 ] 00:31:49.907 [2024-11-20 18:41:08.433414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:49.907 [2024-11-20 18:41:08.520475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:50.848 18:41:09 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:50.848 18:41:09 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:31:50.848 18:41:09 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:31:50.848 18:41:09 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:31:50.848 18:41:09 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:31:50.848 18:41:09 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:31:50.848 18:41:09 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:31:50.848 18:41:09 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:31:50.848 18:41:09 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:31:50.848 18:41:09 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:31:50.848 18:41:09 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:31:50.848 18:41:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:31:50.848 18:41:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:50.848 18:41:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:50.848 18:41:09 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1385 -- # local nb 00:31:50.848 18:41:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:31:51.107 18:41:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:51.107 { 00:31:51.107 "name": "nvme0n1", 00:31:51.107 "aliases": [ 00:31:51.107 "f05ca2ee-478c-49e1-8dc3-e5224cbcfefe" 00:31:51.107 ], 00:31:51.107 "product_name": "NVMe disk", 00:31:51.107 "block_size": 4096, 00:31:51.107 "num_blocks": 1310720, 00:31:51.107 "uuid": "f05ca2ee-478c-49e1-8dc3-e5224cbcfefe", 00:31:51.107 "numa_id": -1, 00:31:51.107 "assigned_rate_limits": { 00:31:51.107 "rw_ios_per_sec": 0, 00:31:51.107 "rw_mbytes_per_sec": 0, 00:31:51.107 "r_mbytes_per_sec": 0, 00:31:51.107 "w_mbytes_per_sec": 0 00:31:51.107 }, 00:31:51.107 "claimed": true, 00:31:51.107 "claim_type": "read_many_write_one", 00:31:51.107 "zoned": false, 00:31:51.107 "supported_io_types": { 00:31:51.107 "read": true, 00:31:51.107 "write": true, 00:31:51.107 "unmap": true, 00:31:51.107 "flush": true, 00:31:51.107 "reset": true, 00:31:51.107 "nvme_admin": true, 00:31:51.107 "nvme_io": true, 00:31:51.107 "nvme_io_md": false, 00:31:51.107 "write_zeroes": true, 00:31:51.107 "zcopy": false, 00:31:51.107 "get_zone_info": false, 00:31:51.107 "zone_management": false, 00:31:51.107 "zone_append": false, 00:31:51.107 "compare": true, 00:31:51.107 "compare_and_write": false, 00:31:51.107 "abort": true, 00:31:51.107 "seek_hole": false, 00:31:51.107 "seek_data": false, 00:31:51.107 "copy": true, 00:31:51.107 "nvme_iov_md": false 00:31:51.107 }, 00:31:51.107 "driver_specific": { 00:31:51.107 "nvme": [ 00:31:51.107 { 00:31:51.107 "pci_address": "0000:00:11.0", 00:31:51.107 "trid": { 00:31:51.107 "trtype": "PCIe", 00:31:51.107 "traddr": "0000:00:11.0" 00:31:51.107 }, 00:31:51.107 "ctrlr_data": { 00:31:51.107 "cntlid": 0, 00:31:51.107 "vendor_id": "0x1b36", 00:31:51.107 "model_number": "QEMU NVMe Ctrl", 00:31:51.107 "serial_number": "12341", 00:31:51.107 "firmware_revision": "8.0.0", 00:31:51.108 "subnqn": "nqn.2019-08.org.qemu:12341", 00:31:51.108 "oacs": { 00:31:51.108 "security": 0, 00:31:51.108 "format": 1, 00:31:51.108 "firmware": 0, 00:31:51.108 "ns_manage": 1 00:31:51.108 }, 00:31:51.108 "multi_ctrlr": false, 00:31:51.108 "ana_reporting": false 00:31:51.108 }, 00:31:51.108 "vs": { 00:31:51.108 "nvme_version": "1.4" 00:31:51.108 }, 00:31:51.108 "ns_data": { 00:31:51.108 "id": 1, 00:31:51.108 "can_share": false 00:31:51.108 } 00:31:51.108 } 00:31:51.108 ], 00:31:51.108 "mp_policy": "active_passive" 00:31:51.108 } 00:31:51.108 } 00:31:51.108 ]' 00:31:51.108 18:41:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:51.108 18:41:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:51.108 18:41:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:51.108 18:41:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:31:51.108 18:41:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:31:51.108 18:41:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:31:51.108 18:41:09 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:31:51.108 18:41:09 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:31:51.108 18:41:09 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:31:51.108 18:41:09 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:51.108 18:41:09 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:51.368 18:41:09 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=2e760d3d-8d3e-477a-9ebe-245f4661448e 00:31:51.368 18:41:09 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:31:51.368 18:41:09 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2e760d3d-8d3e-477a-9ebe-245f4661448e 00:31:51.628 18:41:10 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:31:51.628 18:41:10 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=f60eab26-a097-4227-8a6c-629b7b0aedf7 00:31:51.628 18:41:10 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f60eab26-a097-4227-8a6c-629b7b0aedf7 00:31:51.890 18:41:10 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=e46f6e59-729f-414e-8125-51b1aad87f31 00:31:51.890 18:41:10 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:31:51.890 18:41:10 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e46f6e59-729f-414e-8125-51b1aad87f31 00:31:51.890 18:41:10 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:31:51.890 18:41:10 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:31:51.890 18:41:10 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=e46f6e59-729f-414e-8125-51b1aad87f31 00:31:51.890 18:41:10 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:31:51.890 18:41:10 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size e46f6e59-729f-414e-8125-51b1aad87f31 00:31:51.890 18:41:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=e46f6e59-729f-414e-8125-51b1aad87f31 00:31:51.890 18:41:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:51.890 18:41:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:51.890 18:41:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:31:51.890 18:41:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e46f6e59-729f-414e-8125-51b1aad87f31 00:31:52.149 18:41:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:52.149 { 00:31:52.149 "name": "e46f6e59-729f-414e-8125-51b1aad87f31", 00:31:52.149 "aliases": [ 00:31:52.149 "lvs/nvme0n1p0" 00:31:52.149 ], 00:31:52.149 "product_name": "Logical Volume", 00:31:52.149 "block_size": 4096, 00:31:52.149 "num_blocks": 26476544, 00:31:52.149 "uuid": "e46f6e59-729f-414e-8125-51b1aad87f31", 00:31:52.149 "assigned_rate_limits": { 00:31:52.149 "rw_ios_per_sec": 0, 00:31:52.149 "rw_mbytes_per_sec": 0, 00:31:52.149 "r_mbytes_per_sec": 0, 00:31:52.149 "w_mbytes_per_sec": 0 00:31:52.149 }, 00:31:52.149 "claimed": false, 00:31:52.149 "zoned": false, 00:31:52.149 "supported_io_types": { 00:31:52.149 "read": true, 00:31:52.149 "write": true, 00:31:52.149 "unmap": true, 00:31:52.149 "flush": false, 00:31:52.149 "reset": true, 00:31:52.149 "nvme_admin": false, 00:31:52.149 "nvme_io": false, 00:31:52.149 "nvme_io_md": false, 00:31:52.149 "write_zeroes": true, 00:31:52.149 "zcopy": false, 00:31:52.149 "get_zone_info": false, 00:31:52.149 "zone_management": false, 00:31:52.149 
"zone_append": false, 00:31:52.149 "compare": false, 00:31:52.149 "compare_and_write": false, 00:31:52.149 "abort": false, 00:31:52.149 "seek_hole": true, 00:31:52.149 "seek_data": true, 00:31:52.149 "copy": false, 00:31:52.149 "nvme_iov_md": false 00:31:52.149 }, 00:31:52.149 "driver_specific": { 00:31:52.149 "lvol": { 00:31:52.149 "lvol_store_uuid": "f60eab26-a097-4227-8a6c-629b7b0aedf7", 00:31:52.149 "base_bdev": "nvme0n1", 00:31:52.149 "thin_provision": true, 00:31:52.149 "num_allocated_clusters": 0, 00:31:52.149 "snapshot": false, 00:31:52.149 "clone": false, 00:31:52.149 "esnap_clone": false 00:31:52.149 } 00:31:52.149 } 00:31:52.149 } 00:31:52.149 ]' 00:31:52.149 18:41:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:52.149 18:41:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:52.149 18:41:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:52.149 18:41:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:52.149 18:41:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:52.149 18:41:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:31:52.149 18:41:10 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:31:52.149 18:41:10 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:31:52.149 18:41:10 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:31:52.409 18:41:10 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:31:52.410 18:41:10 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:31:52.410 18:41:10 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size e46f6e59-729f-414e-8125-51b1aad87f31 00:31:52.410 18:41:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=e46f6e59-729f-414e-8125-51b1aad87f31 00:31:52.410 18:41:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:52.410 18:41:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:52.410 18:41:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:31:52.410 18:41:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e46f6e59-729f-414e-8125-51b1aad87f31 00:31:52.669 18:41:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:52.669 { 00:31:52.669 "name": "e46f6e59-729f-414e-8125-51b1aad87f31", 00:31:52.669 "aliases": [ 00:31:52.669 "lvs/nvme0n1p0" 00:31:52.669 ], 00:31:52.669 "product_name": "Logical Volume", 00:31:52.669 "block_size": 4096, 00:31:52.669 "num_blocks": 26476544, 00:31:52.669 "uuid": "e46f6e59-729f-414e-8125-51b1aad87f31", 00:31:52.669 "assigned_rate_limits": { 00:31:52.669 "rw_ios_per_sec": 0, 00:31:52.669 "rw_mbytes_per_sec": 0, 00:31:52.669 "r_mbytes_per_sec": 0, 00:31:52.669 "w_mbytes_per_sec": 0 00:31:52.669 }, 00:31:52.669 "claimed": false, 00:31:52.669 "zoned": false, 00:31:52.669 "supported_io_types": { 00:31:52.669 "read": true, 00:31:52.669 "write": true, 00:31:52.669 "unmap": true, 00:31:52.669 "flush": false, 00:31:52.669 "reset": true, 00:31:52.669 "nvme_admin": false, 00:31:52.669 "nvme_io": false, 00:31:52.669 "nvme_io_md": false, 00:31:52.669 "write_zeroes": true, 00:31:52.669 "zcopy": false, 00:31:52.669 "get_zone_info": false, 00:31:52.669 
"zone_management": false, 00:31:52.669 "zone_append": false, 00:31:52.669 "compare": false, 00:31:52.669 "compare_and_write": false, 00:31:52.669 "abort": false, 00:31:52.669 "seek_hole": true, 00:31:52.669 "seek_data": true, 00:31:52.669 "copy": false, 00:31:52.669 "nvme_iov_md": false 00:31:52.669 }, 00:31:52.669 "driver_specific": { 00:31:52.669 "lvol": { 00:31:52.669 "lvol_store_uuid": "f60eab26-a097-4227-8a6c-629b7b0aedf7", 00:31:52.669 "base_bdev": "nvme0n1", 00:31:52.669 "thin_provision": true, 00:31:52.669 "num_allocated_clusters": 0, 00:31:52.669 "snapshot": false, 00:31:52.669 "clone": false, 00:31:52.669 "esnap_clone": false 00:31:52.669 } 00:31:52.669 } 00:31:52.669 } 00:31:52.669 ]' 00:31:52.669 18:41:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:52.669 18:41:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:52.669 18:41:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:52.669 18:41:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:52.669 18:41:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:52.669 18:41:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:31:52.669 18:41:11 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:31:52.669 18:41:11 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:31:52.930 18:41:11 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:31:52.931 18:41:11 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size e46f6e59-729f-414e-8125-51b1aad87f31 00:31:52.931 18:41:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=e46f6e59-729f-414e-8125-51b1aad87f31 00:31:52.931 18:41:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:52.931 18:41:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:52.931 18:41:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:31:52.931 18:41:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e46f6e59-729f-414e-8125-51b1aad87f31 00:31:52.931 18:41:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:52.931 { 00:31:52.931 "name": "e46f6e59-729f-414e-8125-51b1aad87f31", 00:31:52.931 "aliases": [ 00:31:52.931 "lvs/nvme0n1p0" 00:31:52.931 ], 00:31:52.931 "product_name": "Logical Volume", 00:31:52.931 "block_size": 4096, 00:31:52.931 "num_blocks": 26476544, 00:31:52.931 "uuid": "e46f6e59-729f-414e-8125-51b1aad87f31", 00:31:52.931 "assigned_rate_limits": { 00:31:52.931 "rw_ios_per_sec": 0, 00:31:52.931 "rw_mbytes_per_sec": 0, 00:31:52.931 "r_mbytes_per_sec": 0, 00:31:52.931 "w_mbytes_per_sec": 0 00:31:52.931 }, 00:31:52.931 "claimed": false, 00:31:52.931 "zoned": false, 00:31:52.931 "supported_io_types": { 00:31:52.931 "read": true, 00:31:52.931 "write": true, 00:31:52.931 "unmap": true, 00:31:52.931 "flush": false, 00:31:52.931 "reset": true, 00:31:52.931 "nvme_admin": false, 00:31:52.931 "nvme_io": false, 00:31:52.931 "nvme_io_md": false, 00:31:52.931 "write_zeroes": true, 00:31:52.931 "zcopy": false, 00:31:52.931 "get_zone_info": false, 00:31:52.931 "zone_management": false, 00:31:52.931 "zone_append": false, 00:31:52.931 "compare": false, 00:31:52.931 "compare_and_write": false, 00:31:52.931 "abort": false, 
00:31:52.931 "seek_hole": true, 00:31:52.931 "seek_data": true, 00:31:52.931 "copy": false, 00:31:52.931 "nvme_iov_md": false 00:31:52.931 }, 00:31:52.931 "driver_specific": { 00:31:52.931 "lvol": { 00:31:52.931 "lvol_store_uuid": "f60eab26-a097-4227-8a6c-629b7b0aedf7", 00:31:52.931 "base_bdev": "nvme0n1", 00:31:52.931 "thin_provision": true, 00:31:52.931 "num_allocated_clusters": 0, 00:31:52.931 "snapshot": false, 00:31:52.931 "clone": false, 00:31:52.931 "esnap_clone": false 00:31:52.931 } 00:31:52.931 } 00:31:52.931 } 00:31:52.931 ]' 00:31:52.931 18:41:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:52.931 18:41:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:52.931 18:41:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:53.193 18:41:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:53.193 18:41:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:53.193 18:41:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:31:53.193 18:41:11 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:31:53.193 18:41:11 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d e46f6e59-729f-414e-8125-51b1aad87f31 --l2p_dram_limit 10' 00:31:53.193 18:41:11 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:31:53.193 18:41:11 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:31:53.193 18:41:11 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:31:53.193 18:41:11 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:31:53.193 18:41:11 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:31:53.193 18:41:11 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e46f6e59-729f-414e-8125-51b1aad87f31 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:31:53.193 [2024-11-20 18:41:11.757455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.193 [2024-11-20 18:41:11.757612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:53.193 [2024-11-20 18:41:11.757634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:53.193 [2024-11-20 18:41:11.757642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.193 [2024-11-20 18:41:11.757687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.193 [2024-11-20 18:41:11.757695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:53.193 [2024-11-20 18:41:11.757704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:31:53.193 [2024-11-20 18:41:11.757710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.193 [2024-11-20 18:41:11.757731] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:53.193 [2024-11-20 18:41:11.758257] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:53.193 [2024-11-20 18:41:11.758277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.193 [2024-11-20 18:41:11.758283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:53.193 [2024-11-20 18:41:11.758292] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.551 ms 00:31:53.193 [2024-11-20 18:41:11.758298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.193 [2024-11-20 18:41:11.758327] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ceec07e5-4dff-4c79-a04f-9021d06f00b5 00:31:53.193 [2024-11-20 18:41:11.759639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.193 [2024-11-20 18:41:11.759673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:31:53.193 [2024-11-20 18:41:11.759683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:31:53.193 [2024-11-20 18:41:11.759692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.193 [2024-11-20 18:41:11.766649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.193 [2024-11-20 18:41:11.766679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:53.193 [2024-11-20 18:41:11.766689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.918 ms 00:31:53.193 [2024-11-20 18:41:11.766696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.193 [2024-11-20 18:41:11.766801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.193 [2024-11-20 18:41:11.766812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:53.193 [2024-11-20 18:41:11.766820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:31:53.193 [2024-11-20 18:41:11.766831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.193 [2024-11-20 18:41:11.766878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.193 [2024-11-20 18:41:11.766890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:53.193 [2024-11-20 18:41:11.766896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:53.193 [2024-11-20 18:41:11.766906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.193 [2024-11-20 18:41:11.766922] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:53.193 [2024-11-20 18:41:11.770210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.193 [2024-11-20 18:41:11.770247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:53.193 [2024-11-20 18:41:11.770258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.289 ms 00:31:53.193 [2024-11-20 18:41:11.770264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.193 [2024-11-20 18:41:11.770293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.193 [2024-11-20 18:41:11.770300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:53.193 [2024-11-20 18:41:11.770308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:53.193 [2024-11-20 18:41:11.770314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.193 [2024-11-20 18:41:11.770329] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:31:53.193 [2024-11-20 18:41:11.770439] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:53.193 [2024-11-20 18:41:11.770451] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:53.193 [2024-11-20 18:41:11.770460] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:53.193 [2024-11-20 18:41:11.770469] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:53.193 [2024-11-20 18:41:11.770476] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:53.193 [2024-11-20 18:41:11.770484] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:53.193 [2024-11-20 18:41:11.770490] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:53.193 [2024-11-20 18:41:11.770499] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:53.193 [2024-11-20 18:41:11.770505] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:53.193 [2024-11-20 18:41:11.770512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.193 [2024-11-20 18:41:11.770517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:53.193 [2024-11-20 18:41:11.770525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms 00:31:53.193 [2024-11-20 18:41:11.770536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.193 [2024-11-20 18:41:11.770603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.193 [2024-11-20 18:41:11.770609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:53.193 [2024-11-20 18:41:11.770616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:31:53.193 [2024-11-20 18:41:11.770622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.193 [2024-11-20 18:41:11.770703] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:53.193 [2024-11-20 18:41:11.770710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:53.193 [2024-11-20 18:41:11.770718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:53.193 [2024-11-20 18:41:11.770725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:53.193 [2024-11-20 18:41:11.770733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:53.193 [2024-11-20 18:41:11.770738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:53.193 [2024-11-20 18:41:11.770746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:53.193 [2024-11-20 18:41:11.770752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:53.193 [2024-11-20 18:41:11.770761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:53.193 [2024-11-20 18:41:11.770767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:53.193 [2024-11-20 18:41:11.770773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:53.193 [2024-11-20 18:41:11.770780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:53.193 [2024-11-20 18:41:11.770786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:53.194 [2024-11-20 18:41:11.770791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:53.194 [2024-11-20 18:41:11.770798] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:53.194 [2024-11-20 18:41:11.770803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:53.194 [2024-11-20 18:41:11.770811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:53.194 [2024-11-20 18:41:11.770816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:53.194 [2024-11-20 18:41:11.770824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:53.194 [2024-11-20 18:41:11.770829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:53.194 [2024-11-20 18:41:11.770835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:53.194 [2024-11-20 18:41:11.770840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:53.194 [2024-11-20 18:41:11.770847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:53.194 [2024-11-20 18:41:11.770852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:53.194 [2024-11-20 18:41:11.770858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:53.194 [2024-11-20 18:41:11.770863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:53.194 [2024-11-20 18:41:11.770869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:53.194 [2024-11-20 18:41:11.770874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:53.194 [2024-11-20 18:41:11.770880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:53.194 [2024-11-20 18:41:11.770885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:53.194 [2024-11-20 18:41:11.770892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:53.194 [2024-11-20 18:41:11.770898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:53.194 [2024-11-20 18:41:11.770906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:53.194 [2024-11-20 18:41:11.770911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:53.194 [2024-11-20 18:41:11.770918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:53.194 [2024-11-20 18:41:11.770923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:53.194 [2024-11-20 18:41:11.770929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:53.194 [2024-11-20 18:41:11.770934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:53.194 [2024-11-20 18:41:11.770940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:53.194 [2024-11-20 18:41:11.770945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:53.194 [2024-11-20 18:41:11.770953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:53.194 [2024-11-20 18:41:11.770958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:53.194 [2024-11-20 18:41:11.770965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:53.194 [2024-11-20 18:41:11.770970] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:53.194 [2024-11-20 18:41:11.770978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:53.194 [2024-11-20 18:41:11.770984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:31:53.194 [2024-11-20 18:41:11.770993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:53.194 [2024-11-20 18:41:11.770999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:53.194 [2024-11-20 18:41:11.771008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:53.194 [2024-11-20 18:41:11.771013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:53.194 [2024-11-20 18:41:11.771020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:53.194 [2024-11-20 18:41:11.771025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:53.194 [2024-11-20 18:41:11.771033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:53.194 [2024-11-20 18:41:11.771041] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:53.194 [2024-11-20 18:41:11.771050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:53.194 [2024-11-20 18:41:11.771058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:53.194 [2024-11-20 18:41:11.771065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:53.194 [2024-11-20 18:41:11.771070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:53.194 [2024-11-20 18:41:11.771077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:53.194 [2024-11-20 18:41:11.771083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:53.194 [2024-11-20 18:41:11.771089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:53.194 [2024-11-20 18:41:11.771106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:53.194 [2024-11-20 18:41:11.771114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:53.194 [2024-11-20 18:41:11.771119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:53.194 [2024-11-20 18:41:11.771128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:53.194 [2024-11-20 18:41:11.771134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:53.194 [2024-11-20 18:41:11.771141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:53.194 [2024-11-20 18:41:11.771147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:53.194 [2024-11-20 18:41:11.771155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
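For reference: stripped of the ftl/common.sh and ftl/restore.sh helper plumbing traced above, the bdev stack whose layout is being dumped here was assembled with the following RPC sequence. This is a condensed sketch using the PCIe address, UUID and sizes observed in this run, not the full test logic:

  # Attach the PCIe NVMe controller that backs the write-buffer cache.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0

  # Query the lvol geometry: block_size 4096 * num_blocks 26476544 = 103424 MiB.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e46f6e59-729f-414e-8125-51b1aad87f31 \
      | jq '.[] .block_size'

  # Carve one 5171 MiB split off nvc0n1 to serve as the NV cache partition.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1

  # Create the FTL bdev over the lvol, capped at 10 MiB of L2P DRAM,
  # with fast shutdown enabled for the restore test.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
      -d e46f6e59-729f-414e-8125-51b1aad87f31 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown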
00:31:53.194 [2024-11-20 18:41:11.771183] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:53.194 [2024-11-20 18:41:11.771191] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:53.194 [2024-11-20 18:41:11.771198] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:53.194 [2024-11-20 18:41:11.771207] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:53.194 [2024-11-20 18:41:11.771213] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:53.194 [2024-11-20 18:41:11.771220] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:53.194 [2024-11-20 18:41:11.771225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.194 [2024-11-20 18:41:11.771233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:53.194 [2024-11-20 18:41:11.771239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.577 ms 00:31:53.194 [2024-11-20 18:41:11.771246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.194 [2024-11-20 18:41:11.771290] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:31:53.194 [2024-11-20 18:41:11.771303] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:31:57.400 [2024-11-20 18:41:15.389109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:57.400 [2024-11-20 18:41:15.389172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:31:57.400 [2024-11-20 18:41:15.389186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3617.806 ms 00:31:57.400 [2024-11-20 18:41:15.389195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.400 [2024-11-20 18:41:15.412810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:57.400 [2024-11-20 18:41:15.412856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:57.400 [2024-11-20 18:41:15.412868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.436 ms 00:31:57.400 [2024-11-20 18:41:15.412877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.400 [2024-11-20 18:41:15.412979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:57.400 [2024-11-20 18:41:15.412990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:57.400 [2024-11-20 18:41:15.412998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:31:57.400 [2024-11-20 18:41:15.413010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.400 [2024-11-20 18:41:15.439735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:57.400 [2024-11-20 18:41:15.439912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:57.400 [2024-11-20 18:41:15.439927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.696 ms 00:31:57.400 [2024-11-20 18:41:15.439936] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.400 [2024-11-20 18:41:15.439962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:57.400 [2024-11-20 18:41:15.439975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:57.400 [2024-11-20 18:41:15.439982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:57.400 [2024-11-20 18:41:15.439990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.400 [2024-11-20 18:41:15.440422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:57.400 [2024-11-20 18:41:15.440439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:57.400 [2024-11-20 18:41:15.440448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.396 ms 00:31:57.400 [2024-11-20 18:41:15.440456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.400 [2024-11-20 18:41:15.440540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:57.400 [2024-11-20 18:41:15.440550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:57.400 [2024-11-20 18:41:15.440561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:31:57.400 [2024-11-20 18:41:15.440570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.400 [2024-11-20 18:41:15.453664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:57.400 [2024-11-20 18:41:15.453694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:57.400 [2024-11-20 18:41:15.453702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.080 ms 00:31:57.400 [2024-11-20 18:41:15.453710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.400 [2024-11-20 18:41:15.463756] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:57.400 [2024-11-20 18:41:15.466764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:57.400 [2024-11-20 18:41:15.466790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:57.400 [2024-11-20 18:41:15.466800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.996 ms 00:31:57.400 [2024-11-20 18:41:15.466806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.400 [2024-11-20 18:41:15.546452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:57.400 [2024-11-20 18:41:15.546578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:31:57.400 [2024-11-20 18:41:15.546597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.621 ms 00:31:57.400 [2024-11-20 18:41:15.546604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.400 [2024-11-20 18:41:15.546756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:57.400 [2024-11-20 18:41:15.546767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:57.400 [2024-11-20 18:41:15.546779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:31:57.400 [2024-11-20 18:41:15.546786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.400 [2024-11-20 18:41:15.565469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:57.400 [2024-11-20 18:41:15.565580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save 
initial band info metadata 00:31:57.400 [2024-11-20 18:41:15.565598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.645 ms 00:31:57.400 [2024-11-20 18:41:15.565606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.400 [2024-11-20 18:41:15.583706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:57.400 [2024-11-20 18:41:15.583733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:31:57.400 [2024-11-20 18:41:15.583745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.067 ms 00:31:57.400 [2024-11-20 18:41:15.583751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.400 [2024-11-20 18:41:15.584213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:57.400 [2024-11-20 18:41:15.584229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:57.400 [2024-11-20 18:41:15.584239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.433 ms 00:31:57.400 [2024-11-20 18:41:15.584245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.400 [2024-11-20 18:41:15.643358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:57.400 [2024-11-20 18:41:15.643469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:31:57.400 [2024-11-20 18:41:15.643489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.064 ms 00:31:57.400 [2024-11-20 18:41:15.643495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.401 [2024-11-20 18:41:15.663674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:57.401 [2024-11-20 18:41:15.663704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:31:57.401 [2024-11-20 18:41:15.663715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.124 ms 00:31:57.401 [2024-11-20 18:41:15.663723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.401 [2024-11-20 18:41:15.681791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:57.401 [2024-11-20 18:41:15.681816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:31:57.401 [2024-11-20 18:41:15.681826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.036 ms 00:31:57.401 [2024-11-20 18:41:15.681833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.401 [2024-11-20 18:41:15.701049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:57.401 [2024-11-20 18:41:15.701074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:57.401 [2024-11-20 18:41:15.701084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.184 ms 00:31:57.401 [2024-11-20 18:41:15.701090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.401 [2024-11-20 18:41:15.701134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:57.401 [2024-11-20 18:41:15.701142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:57.401 [2024-11-20 18:41:15.701154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:57.401 [2024-11-20 18:41:15.701160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.401 [2024-11-20 18:41:15.701224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:57.401 [2024-11-20 
18:41:15.701232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:57.401 [2024-11-20 18:41:15.701242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:31:57.401 [2024-11-20 18:41:15.701249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.401 [2024-11-20 18:41:15.702446] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3944.598 ms, result 0 00:31:57.401 { 00:31:57.401 "name": "ftl0", 00:31:57.401 "uuid": "ceec07e5-4dff-4c79-a04f-9021d06f00b5" 00:31:57.401 } 00:31:57.401 18:41:15 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:31:57.401 18:41:15 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:31:57.401 18:41:15 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:31:57.401 18:41:15 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:31:57.664 [2024-11-20 18:41:16.105598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:57.664 [2024-11-20 18:41:16.105633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:57.664 [2024-11-20 18:41:16.105643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:57.664 [2024-11-20 18:41:16.105655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.664 [2024-11-20 18:41:16.105672] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:57.664 [2024-11-20 18:41:16.107950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:57.664 [2024-11-20 18:41:16.107973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:57.664 [2024-11-20 18:41:16.107984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.262 ms 00:31:57.664 [2024-11-20 18:41:16.107991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.664 [2024-11-20 18:41:16.108209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:57.664 [2024-11-20 18:41:16.108219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:57.664 [2024-11-20 18:41:16.108230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:31:57.664 [2024-11-20 18:41:16.108236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.664 [2024-11-20 18:41:16.110683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:57.664 [2024-11-20 18:41:16.110700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:57.664 [2024-11-20 18:41:16.110710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.434 ms 00:31:57.664 [2024-11-20 18:41:16.110718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.664 [2024-11-20 18:41:16.115355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:57.664 [2024-11-20 18:41:16.115467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:57.664 [2024-11-20 18:41:16.115485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.621 ms 00:31:57.664 [2024-11-20 18:41:16.115491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.664 [2024-11-20 18:41:16.133584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:31:57.664 [2024-11-20 18:41:16.133609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:57.664 [2024-11-20 18:41:16.133618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.037 ms 00:31:57.664 [2024-11-20 18:41:16.133624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.664 [2024-11-20 18:41:16.147153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:57.664 [2024-11-20 18:41:16.147180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:57.664 [2024-11-20 18:41:16.147191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.495 ms 00:31:57.664 [2024-11-20 18:41:16.147198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.664 [2024-11-20 18:41:16.147314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:57.664 [2024-11-20 18:41:16.147323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:57.664 [2024-11-20 18:41:16.147332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:31:57.664 [2024-11-20 18:41:16.147338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.664 [2024-11-20 18:41:16.166005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:57.664 [2024-11-20 18:41:16.166112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:57.664 [2024-11-20 18:41:16.166128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.651 ms 00:31:57.664 [2024-11-20 18:41:16.166134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.664 [2024-11-20 18:41:16.184225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:57.664 [2024-11-20 18:41:16.184250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:57.664 [2024-11-20 18:41:16.184260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.062 ms 00:31:57.664 [2024-11-20 18:41:16.184265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.664 [2024-11-20 18:41:16.201915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:57.664 [2024-11-20 18:41:16.202009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:57.664 [2024-11-20 18:41:16.202024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.616 ms 00:31:57.664 [2024-11-20 18:41:16.202029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.664 [2024-11-20 18:41:16.219385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:57.664 [2024-11-20 18:41:16.219409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:57.664 [2024-11-20 18:41:16.219419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.299 ms 00:31:57.664 [2024-11-20 18:41:16.219425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.664 [2024-11-20 18:41:16.219454] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:57.664 [2024-11-20 18:41:16.219466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:57.664 [2024-11-20 18:41:16.219475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:57.664 [2024-11-20 18:41:16.219481] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:57.664 [2024-11-20 18:41:16.219490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:57.664 [2024-11-20 18:41:16.219496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:57.664 [2024-11-20 18:41:16.219503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:57.664 [2024-11-20 18:41:16.219509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:57.664 [2024-11-20 18:41:16.219518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:57.664 [2024-11-20 18:41:16.219523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:57.664 [2024-11-20 18:41:16.219531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:57.664 [2024-11-20 18:41:16.219536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:57.664 [2024-11-20 18:41:16.219544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:57.664 [2024-11-20 18:41:16.219550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:57.664 [2024-11-20 18:41:16.219557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:57.664 [2024-11-20 18:41:16.219562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:57.664 [2024-11-20 18:41:16.219569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:57.664 [2024-11-20 18:41:16.219575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:57.664 [2024-11-20 18:41:16.219582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:57.664 [2024-11-20 18:41:16.219587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:57.664 [2024-11-20 18:41:16.219595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:57.664 [2024-11-20 18:41:16.219601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:57.664 [2024-11-20 18:41:16.219609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:57.664 [2024-11-20 18:41:16.219615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:57.664 [2024-11-20 18:41:16.219624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:57.664 [2024-11-20 18:41:16.219630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:57.664 [2024-11-20 18:41:16.219637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:57.664 [2024-11-20 18:41:16.219644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:57.664 [2024-11-20 18:41:16.219651] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:57.664 [2024-11-20 18:41:16.219657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:57.664 [2024-11-20 18:41:16.219664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:57.664 [2024-11-20 18:41:16.219670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:57.664 [2024-11-20 18:41:16.219678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 
[2024-11-20 18:41:16.219813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:31:57.665 [2024-11-20 18:41:16.219989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.219994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.220002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.220007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.220014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.220020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.220027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.220034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.220041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.220046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.220055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.220060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.220067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.220073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.220079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.220085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.220115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.220121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.220129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.220136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.220144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.220150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.220158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:57.665 [2024-11-20 18:41:16.220169] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:57.665 [2024-11-20 18:41:16.220180] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ceec07e5-4dff-4c79-a04f-9021d06f00b5 
00:31:57.665 [2024-11-20 18:41:16.220186] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:57.665 [2024-11-20 18:41:16.220195] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:31:57.665 [2024-11-20 18:41:16.220200] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:57.665 [2024-11-20 18:41:16.220210] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:57.665 [2024-11-20 18:41:16.220217] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:57.665 [2024-11-20 18:41:16.220224] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:57.665 [2024-11-20 18:41:16.220229] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:57.665 [2024-11-20 18:41:16.220235] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:57.665 [2024-11-20 18:41:16.220241] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:57.665 [2024-11-20 18:41:16.220270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:57.665 [2024-11-20 18:41:16.220278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:57.665 [2024-11-20 18:41:16.220286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.817 ms 00:31:57.665 [2024-11-20 18:41:16.220291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.665 [2024-11-20 18:41:16.229761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:57.665 [2024-11-20 18:41:16.229784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:57.665 [2024-11-20 18:41:16.229794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.441 ms 00:31:57.665 [2024-11-20 18:41:16.229800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.665 [2024-11-20 18:41:16.230083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:57.665 [2024-11-20 18:41:16.230091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:57.665 [2024-11-20 18:41:16.230110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:31:57.665 [2024-11-20 18:41:16.230118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.665 [2024-11-20 18:41:16.265003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:57.665 [2024-11-20 18:41:16.265030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:57.665 [2024-11-20 18:41:16.265040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:57.666 [2024-11-20 18:41:16.265047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.666 [2024-11-20 18:41:16.265111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:57.666 [2024-11-20 18:41:16.265118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:57.666 [2024-11-20 18:41:16.265126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:57.666 [2024-11-20 18:41:16.265135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.666 [2024-11-20 18:41:16.265191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:57.666 [2024-11-20 18:41:16.265200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:57.666 [2024-11-20 18:41:16.265208] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:57.666 [2024-11-20 18:41:16.265214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.666 [2024-11-20 18:41:16.265233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:57.666 [2024-11-20 18:41:16.265239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:57.666 [2024-11-20 18:41:16.265247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:57.666 [2024-11-20 18:41:16.265253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.926 [2024-11-20 18:41:16.328361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:57.926 [2024-11-20 18:41:16.328396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:57.926 [2024-11-20 18:41:16.328407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:57.926 [2024-11-20 18:41:16.328413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.926 [2024-11-20 18:41:16.379410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:57.926 [2024-11-20 18:41:16.379587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:57.926 [2024-11-20 18:41:16.379603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:57.926 [2024-11-20 18:41:16.379612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.926 [2024-11-20 18:41:16.379701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:57.926 [2024-11-20 18:41:16.379709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:57.926 [2024-11-20 18:41:16.379717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:57.926 [2024-11-20 18:41:16.379723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.926 [2024-11-20 18:41:16.379766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:57.926 [2024-11-20 18:41:16.379774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:57.926 [2024-11-20 18:41:16.379782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:57.926 [2024-11-20 18:41:16.379788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.926 [2024-11-20 18:41:16.379871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:57.926 [2024-11-20 18:41:16.379879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:57.926 [2024-11-20 18:41:16.379887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:57.926 [2024-11-20 18:41:16.379893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.926 [2024-11-20 18:41:16.379921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:57.926 [2024-11-20 18:41:16.379930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:57.926 [2024-11-20 18:41:16.379938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:57.926 [2024-11-20 18:41:16.379944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.926 [2024-11-20 18:41:16.379982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:57.926 [2024-11-20 18:41:16.379991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:31:57.926 [2024-11-20 18:41:16.379999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:57.926 [2024-11-20 18:41:16.380006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.926 [2024-11-20 18:41:16.380048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:57.926 [2024-11-20 18:41:16.380056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:57.926 [2024-11-20 18:41:16.380064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:57.926 [2024-11-20 18:41:16.380070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:57.926 [2024-11-20 18:41:16.380208] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 274.569 ms, result 0 00:31:57.926 true 00:31:57.927 18:41:16 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 84107 00:31:57.927 18:41:16 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 84107 ']' 00:31:57.927 18:41:16 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 84107 00:31:57.927 18:41:16 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:31:57.927 18:41:16 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:57.927 18:41:16 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84107 00:31:57.927 killing process with pid 84107 00:31:57.927 18:41:16 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:57.927 18:41:16 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:57.927 18:41:16 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84107' 00:31:57.927 18:41:16 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 84107 00:31:57.927 18:41:16 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 84107 00:32:04.517 18:41:22 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:32:07.807 262144+0 records in 00:32:07.807 262144+0 records out 00:32:07.807 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.63128 s, 296 MB/s 00:32:07.807 18:41:25 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:32:09.184 18:41:27 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:09.184 [2024-11-20 18:41:27.518881] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
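With ftl0 shut down cleanly (its configuration having been captured just before the unload via save_subsystem_config -n bdev, wrapped in a '{"subsystems": [...]}' envelope), the test generates 1 GiB of random data, records its md5 on the host, and streams it into ftl0 with spdk_dd, which rebuilds the bdev stack from the saved JSON. A condensed sketch of the step, using the paths from this run:

  # 1 GiB of random data; this run copied 1073741824 bytes at ~296 MB/s.
  dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
  md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile

  # Replay the saved bdev config and stream the file into the FTL bdev.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
      --ob=ftl0 \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json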
00:32:09.184 [2024-11-20 18:41:27.518973] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84336 ] 00:32:09.184 [2024-11-20 18:41:27.666728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:09.184 [2024-11-20 18:41:27.756911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:09.443 [2024-11-20 18:41:27.983400] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:09.443 [2024-11-20 18:41:27.983616] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:09.703 [2024-11-20 18:41:28.138923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.703 [2024-11-20 18:41:28.139060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:09.703 [2024-11-20 18:41:28.139083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:09.703 [2024-11-20 18:41:28.139090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.703 [2024-11-20 18:41:28.139148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.703 [2024-11-20 18:41:28.139157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:09.703 [2024-11-20 18:41:28.139166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:32:09.703 [2024-11-20 18:41:28.139172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.703 [2024-11-20 18:41:28.139187] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:09.703 [2024-11-20 18:41:28.139748] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:09.703 [2024-11-20 18:41:28.139761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.703 [2024-11-20 18:41:28.139767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:09.703 [2024-11-20 18:41:28.139774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.577 ms 00:32:09.703 [2024-11-20 18:41:28.139781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.703 [2024-11-20 18:41:28.141030] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:32:09.703 [2024-11-20 18:41:28.151533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.703 [2024-11-20 18:41:28.151560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:09.703 [2024-11-20 18:41:28.151570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.505 ms 00:32:09.703 [2024-11-20 18:41:28.151577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.703 [2024-11-20 18:41:28.151623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.703 [2024-11-20 18:41:28.151631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:09.703 [2024-11-20 18:41:28.151637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:32:09.703 [2024-11-20 18:41:28.151643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.703 [2024-11-20 18:41:28.157876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
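Before the memory-pool initialization step continues below, note the wiring this startup has established so far: a base bdev plus nvc0n1p0 as the write-buffer (NV) cache, and a superblock loaded with 'SHM: clean 0, shm_clean 0' — the previous instance did not leave a clean shared-memory state, so full recovery follows. For reference, an FTL bdev of this shape is normally created with the bdev_ftl_create RPC; a hedged sketch only, with bdev names taken from this trace where available, "base0" as a placeholder, and flags to be checked against rpc.py bdev_ftl_create -h for this SPDK revision:

# Hedged sketch: how an ftl0-style bdev is typically created.
# -b  FTL bdev name (matches the [FTL][ftl0] tags in the trace)
# -d  base bdev -- not named in this log; "base0" is a placeholder
# -c  write-buffer cache bdev -- nvc0n1p0 per the trace above
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_create \
  -b ftl0 -d base0 -c nvc0n1p0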
00:32:09.704 [2024-11-20 18:41:28.157996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:09.704 [2024-11-20 18:41:28.158009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.192 ms 00:32:09.704 [2024-11-20 18:41:28.158015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.704 [2024-11-20 18:41:28.158079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.704 [2024-11-20 18:41:28.158086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:09.704 [2024-11-20 18:41:28.158105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:32:09.704 [2024-11-20 18:41:28.158112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.704 [2024-11-20 18:41:28.158160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.704 [2024-11-20 18:41:28.158168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:09.704 [2024-11-20 18:41:28.158175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:09.704 [2024-11-20 18:41:28.158182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.704 [2024-11-20 18:41:28.158200] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:09.704 [2024-11-20 18:41:28.161107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.704 [2024-11-20 18:41:28.161129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:09.704 [2024-11-20 18:41:28.161137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.911 ms 00:32:09.704 [2024-11-20 18:41:28.161145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.704 [2024-11-20 18:41:28.161172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.704 [2024-11-20 18:41:28.161179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:09.704 [2024-11-20 18:41:28.161186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:32:09.704 [2024-11-20 18:41:28.161191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.704 [2024-11-20 18:41:28.161207] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:09.704 [2024-11-20 18:41:28.161224] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:09.704 [2024-11-20 18:41:28.161254] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:09.704 [2024-11-20 18:41:28.161269] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:09.704 [2024-11-20 18:41:28.161354] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:09.704 [2024-11-20 18:41:28.161363] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:09.704 [2024-11-20 18:41:28.161371] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:09.704 [2024-11-20 18:41:28.161379] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:09.704 [2024-11-20 18:41:28.161387] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:09.704 [2024-11-20 18:41:28.161395] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:09.704 [2024-11-20 18:41:28.161401] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:09.704 [2024-11-20 18:41:28.161407] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:09.704 [2024-11-20 18:41:28.161413] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:09.704 [2024-11-20 18:41:28.161421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.704 [2024-11-20 18:41:28.161428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:09.704 [2024-11-20 18:41:28.161434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:32:09.704 [2024-11-20 18:41:28.161441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.704 [2024-11-20 18:41:28.161506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.704 [2024-11-20 18:41:28.161514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:09.704 [2024-11-20 18:41:28.161519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:32:09.704 [2024-11-20 18:41:28.161525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.704 [2024-11-20 18:41:28.161602] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:09.704 [2024-11-20 18:41:28.161613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:09.704 [2024-11-20 18:41:28.161620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:09.704 [2024-11-20 18:41:28.161626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:09.704 [2024-11-20 18:41:28.161633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:09.704 [2024-11-20 18:41:28.161639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:09.704 [2024-11-20 18:41:28.161645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:09.704 [2024-11-20 18:41:28.161652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:09.704 [2024-11-20 18:41:28.161657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:09.704 [2024-11-20 18:41:28.161663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:09.704 [2024-11-20 18:41:28.161668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:09.704 [2024-11-20 18:41:28.161676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:09.704 [2024-11-20 18:41:28.161681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:09.704 [2024-11-20 18:41:28.161686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:09.704 [2024-11-20 18:41:28.161692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:09.704 [2024-11-20 18:41:28.161702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:09.704 [2024-11-20 18:41:28.161708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:09.704 [2024-11-20 18:41:28.161713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:09.704 [2024-11-20 18:41:28.161717] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:09.704 [2024-11-20 18:41:28.161723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:09.704 [2024-11-20 18:41:28.161728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:09.704 [2024-11-20 18:41:28.161733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:09.704 [2024-11-20 18:41:28.161739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:09.704 [2024-11-20 18:41:28.161744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:09.704 [2024-11-20 18:41:28.161748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:09.704 [2024-11-20 18:41:28.161754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:09.704 [2024-11-20 18:41:28.161758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:09.704 [2024-11-20 18:41:28.161763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:09.704 [2024-11-20 18:41:28.161768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:09.704 [2024-11-20 18:41:28.161773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:09.704 [2024-11-20 18:41:28.161778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:09.704 [2024-11-20 18:41:28.161783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:09.704 [2024-11-20 18:41:28.161788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:09.704 [2024-11-20 18:41:28.161793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:09.704 [2024-11-20 18:41:28.161798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:09.704 [2024-11-20 18:41:28.161803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:09.704 [2024-11-20 18:41:28.161809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:09.704 [2024-11-20 18:41:28.161814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:09.704 [2024-11-20 18:41:28.161819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:09.704 [2024-11-20 18:41:28.161824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:09.704 [2024-11-20 18:41:28.161829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:09.704 [2024-11-20 18:41:28.161833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:09.704 [2024-11-20 18:41:28.161840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:09.704 [2024-11-20 18:41:28.161847] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:09.704 [2024-11-20 18:41:28.161854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:09.704 [2024-11-20 18:41:28.161860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:09.704 [2024-11-20 18:41:28.161865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:09.704 [2024-11-20 18:41:28.161872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:09.704 [2024-11-20 18:41:28.161877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:09.704 [2024-11-20 18:41:28.161883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:09.704 
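The region dump above (continuing with the base-device regions below) is internally consistent with the summary printed before it: 20971520 L2P entries at an address size of 4 bytes come to exactly 80 MiB, matching the l2p region's 'blocks: 80.00 MiB'. A one-line check:

# L2P table size = entries x bytes per address, expressed in MiB.
# Both inputs are printed in the layout summary above.
echo $(( 20971520 * 4 / 1024 / 1024 ))   # -> 80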
[2024-11-20 18:41:28.161888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:09.704 [2024-11-20 18:41:28.161893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:09.704 [2024-11-20 18:41:28.161898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:09.704 [2024-11-20 18:41:28.161904] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:09.704 [2024-11-20 18:41:28.161912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:09.704 [2024-11-20 18:41:28.161918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:09.704 [2024-11-20 18:41:28.161923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:09.704 [2024-11-20 18:41:28.161929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:09.704 [2024-11-20 18:41:28.161935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:09.704 [2024-11-20 18:41:28.161942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:09.704 [2024-11-20 18:41:28.161947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:09.704 [2024-11-20 18:41:28.161952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:09.705 [2024-11-20 18:41:28.161957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:09.705 [2024-11-20 18:41:28.161962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:09.705 [2024-11-20 18:41:28.161968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:09.705 [2024-11-20 18:41:28.161973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:09.705 [2024-11-20 18:41:28.161978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:09.705 [2024-11-20 18:41:28.161983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:09.705 [2024-11-20 18:41:28.161989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:09.705 [2024-11-20 18:41:28.161994] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:09.705 [2024-11-20 18:41:28.162002] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:09.705 [2024-11-20 18:41:28.162009] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:32:09.705 [2024-11-20 18:41:28.162014] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:09.705 [2024-11-20 18:41:28.162020] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:09.705 [2024-11-20 18:41:28.162026] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:09.705 [2024-11-20 18:41:28.162033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.705 [2024-11-20 18:41:28.162040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:09.705 [2024-11-20 18:41:28.162045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.484 ms 00:32:09.705 [2024-11-20 18:41:28.162051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.705 [2024-11-20 18:41:28.186108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.705 [2024-11-20 18:41:28.186143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:09.705 [2024-11-20 18:41:28.186152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.016 ms 00:32:09.705 [2024-11-20 18:41:28.186159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.705 [2024-11-20 18:41:28.186232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.705 [2024-11-20 18:41:28.186238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:09.705 [2024-11-20 18:41:28.186244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:32:09.705 [2024-11-20 18:41:28.186250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.705 [2024-11-20 18:41:28.228921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.705 [2024-11-20 18:41:28.229049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:09.705 [2024-11-20 18:41:28.229064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.632 ms 00:32:09.705 [2024-11-20 18:41:28.229072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.705 [2024-11-20 18:41:28.229118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.705 [2024-11-20 18:41:28.229127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:09.705 [2024-11-20 18:41:28.229134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:09.705 [2024-11-20 18:41:28.229143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.705 [2024-11-20 18:41:28.229554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.705 [2024-11-20 18:41:28.229568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:09.705 [2024-11-20 18:41:28.229576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.369 ms 00:32:09.705 [2024-11-20 18:41:28.229582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.705 [2024-11-20 18:41:28.229690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.705 [2024-11-20 18:41:28.229698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:09.705 [2024-11-20 18:41:28.229705] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:32:09.705 [2024-11-20 18:41:28.229711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.705 [2024-11-20 18:41:28.241549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.705 [2024-11-20 18:41:28.241573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:09.705 [2024-11-20 18:41:28.241582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.818 ms 00:32:09.705 [2024-11-20 18:41:28.241590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.705 [2024-11-20 18:41:28.252174] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:32:09.705 [2024-11-20 18:41:28.252202] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:09.705 [2024-11-20 18:41:28.252212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.705 [2024-11-20 18:41:28.252219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:09.705 [2024-11-20 18:41:28.252226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.531 ms 00:32:09.705 [2024-11-20 18:41:28.252232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.705 [2024-11-20 18:41:28.270924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.705 [2024-11-20 18:41:28.270961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:09.705 [2024-11-20 18:41:28.270974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.662 ms 00:32:09.705 [2024-11-20 18:41:28.270981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.705 [2024-11-20 18:41:28.280218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.705 [2024-11-20 18:41:28.280249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:09.705 [2024-11-20 18:41:28.280257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.206 ms 00:32:09.705 [2024-11-20 18:41:28.280262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.705 [2024-11-20 18:41:28.289006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.705 [2024-11-20 18:41:28.289029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:09.705 [2024-11-20 18:41:28.289037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.718 ms 00:32:09.705 [2024-11-20 18:41:28.289043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.705 [2024-11-20 18:41:28.289514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.705 [2024-11-20 18:41:28.289527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:09.705 [2024-11-20 18:41:28.289535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:32:09.705 [2024-11-20 18:41:28.289541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.964 [2024-11-20 18:41:28.337823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.964 [2024-11-20 18:41:28.337856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:09.964 [2024-11-20 18:41:28.337867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 48.266 ms 00:32:09.964 [2024-11-20 18:41:28.337878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.964 [2024-11-20 18:41:28.346324] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:09.964 [2024-11-20 18:41:28.348812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.964 [2024-11-20 18:41:28.348835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:09.964 [2024-11-20 18:41:28.348845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.899 ms 00:32:09.964 [2024-11-20 18:41:28.348853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.964 [2024-11-20 18:41:28.348915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.965 [2024-11-20 18:41:28.348923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:09.965 [2024-11-20 18:41:28.348931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:32:09.965 [2024-11-20 18:41:28.348937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.965 [2024-11-20 18:41:28.349011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.965 [2024-11-20 18:41:28.349020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:09.965 [2024-11-20 18:41:28.349028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:32:09.965 [2024-11-20 18:41:28.349034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.965 [2024-11-20 18:41:28.349050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.965 [2024-11-20 18:41:28.349057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:09.965 [2024-11-20 18:41:28.349064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:09.965 [2024-11-20 18:41:28.349070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.965 [2024-11-20 18:41:28.349112] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:09.965 [2024-11-20 18:41:28.349121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.965 [2024-11-20 18:41:28.349129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:09.965 [2024-11-20 18:41:28.349136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:32:09.965 [2024-11-20 18:41:28.349142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.965 [2024-11-20 18:41:28.368010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.965 [2024-11-20 18:41:28.368038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:09.965 [2024-11-20 18:41:28.368046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.854 ms 00:32:09.965 [2024-11-20 18:41:28.368053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.965 [2024-11-20 18:41:28.368123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.965 [2024-11-20 18:41:28.368131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:09.965 [2024-11-20 18:41:28.368138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:32:09.965 [2024-11-20 18:41:28.368145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
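Every management step above is the same trace_step() quad from mngt/ftl_mngt.c — an Action (or Rollback) marker, a name, a duration, and a status — and the finish_msg just below rolls the whole sequence up as 'FTL startup' in 230.089 ms. That regularity makes per-step timings easy to pull out of a captured log; a hedged sketch, assuming the log file ("build.log" is a placeholder name) has one trace entry per line as spdk_dd originally printed them:

# Pair each step name with its duration and list the slowest steps first.
grep 'trace_step' build.log | sed -n 's/.*name: //p'     > names.txt
grep 'trace_step' build.log | sed -n 's/.*duration: //p' > times.txt
paste times.txt names.txt | sort -rn | head

Against this run it would surface 'Restore P2L checkpoints' (48.266 ms), 'Initialize NV cache' (42.632 ms) and 'Initialize metadata' (24.016 ms) at the top.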
00:32:09.965 [2024-11-20 18:41:28.369409] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 230.089 ms, result 0 00:32:10.905  [2024-11-20T18:41:30.473Z] Copying: 20/1024 [MB] (20 MBps) [2024-11-20T18:41:31.412Z] Copying: 40/1024 [MB] (20 MBps) [2024-11-20T18:41:32.791Z] Copying: 59/1024 [MB] (19 MBps) [2024-11-20T18:41:33.427Z] Copying: 79/1024 [MB] (19 MBps) [2024-11-20T18:41:34.813Z] Copying: 101/1024 [MB] (22 MBps) [2024-11-20T18:41:35.750Z] Copying: 118/1024 [MB] (16 MBps) [2024-11-20T18:41:36.688Z] Copying: 142/1024 [MB] (23 MBps) [2024-11-20T18:41:37.629Z] Copying: 166/1024 [MB] (24 MBps) [2024-11-20T18:41:38.569Z] Copying: 186/1024 [MB] (20 MBps) [2024-11-20T18:41:39.507Z] Copying: 208/1024 [MB] (21 MBps) [2024-11-20T18:41:40.450Z] Copying: 231/1024 [MB] (23 MBps) [2024-11-20T18:41:41.391Z] Copying: 246/1024 [MB] (15 MBps) [2024-11-20T18:41:42.772Z] Copying: 262/1024 [MB] (15 MBps) [2024-11-20T18:41:43.711Z] Copying: 274/1024 [MB] (11 MBps) [2024-11-20T18:41:44.648Z] Copying: 289124/1048576 [kB] (7756 kBps) [2024-11-20T18:41:45.588Z] Copying: 294/1024 [MB] (11 MBps) [2024-11-20T18:41:46.525Z] Copying: 305/1024 [MB] (11 MBps) [2024-11-20T18:41:47.463Z] Copying: 316/1024 [MB] (10 MBps) [2024-11-20T18:41:48.402Z] Copying: 327/1024 [MB] (11 MBps) [2024-11-20T18:41:49.789Z] Copying: 339/1024 [MB] (11 MBps) [2024-11-20T18:41:50.728Z] Copying: 351/1024 [MB] (12 MBps) [2024-11-20T18:41:51.666Z] Copying: 362/1024 [MB] (11 MBps) [2024-11-20T18:41:52.604Z] Copying: 373/1024 [MB] (11 MBps) [2024-11-20T18:41:53.540Z] Copying: 385/1024 [MB] (11 MBps) [2024-11-20T18:41:54.478Z] Copying: 396/1024 [MB] (11 MBps) [2024-11-20T18:41:55.419Z] Copying: 407/1024 [MB] (11 MBps) [2024-11-20T18:41:56.797Z] Copying: 418/1024 [MB] (10 MBps) [2024-11-20T18:41:57.737Z] Copying: 430/1024 [MB] (11 MBps) [2024-11-20T18:41:58.672Z] Copying: 442/1024 [MB] (11 MBps) [2024-11-20T18:41:59.611Z] Copying: 453/1024 [MB] (11 MBps) [2024-11-20T18:42:00.549Z] Copying: 464/1024 [MB] (11 MBps) [2024-11-20T18:42:01.490Z] Copying: 477/1024 [MB] (12 MBps) [2024-11-20T18:42:02.430Z] Copying: 487/1024 [MB] (10 MBps) [2024-11-20T18:42:03.811Z] Copying: 509800/1048576 [kB] (10152 kBps) [2024-11-20T18:42:04.748Z] Copying: 508/1024 [MB] (10 MBps) [2024-11-20T18:42:05.685Z] Copying: 519/1024 [MB] (11 MBps) [2024-11-20T18:42:06.623Z] Copying: 531/1024 [MB] (11 MBps) [2024-11-20T18:42:07.561Z] Copying: 542/1024 [MB] (11 MBps) [2024-11-20T18:42:08.583Z] Copying: 554/1024 [MB] (11 MBps) [2024-11-20T18:42:09.520Z] Copying: 565/1024 [MB] (11 MBps) [2024-11-20T18:42:10.455Z] Copying: 576/1024 [MB] (11 MBps) [2024-11-20T18:42:11.395Z] Copying: 588/1024 [MB] (11 MBps) [2024-11-20T18:42:12.775Z] Copying: 599/1024 [MB] (10 MBps) [2024-11-20T18:42:13.720Z] Copying: 610/1024 [MB] (11 MBps) [2024-11-20T18:42:14.664Z] Copying: 621/1024 [MB] (11 MBps) [2024-11-20T18:42:15.607Z] Copying: 632/1024 [MB] (10 MBps) [2024-11-20T18:42:16.551Z] Copying: 643/1024 [MB] (10 MBps) [2024-11-20T18:42:17.493Z] Copying: 654/1024 [MB] (11 MBps) [2024-11-20T18:42:18.437Z] Copying: 666/1024 [MB] (11 MBps) [2024-11-20T18:42:19.824Z] Copying: 677/1024 [MB] (11 MBps) [2024-11-20T18:42:20.394Z] Copying: 687/1024 [MB] (10 MBps) [2024-11-20T18:42:21.780Z] Copying: 698/1024 [MB] (10 MBps) [2024-11-20T18:42:22.721Z] Copying: 710/1024 [MB] (11 MBps) [2024-11-20T18:42:23.665Z] Copying: 721/1024 [MB] (11 MBps) [2024-11-20T18:42:24.609Z] Copying: 732/1024 [MB] (11 MBps) [2024-11-20T18:42:25.552Z] Copying: 744/1024 [MB] 
(11 MBps) [2024-11-20T18:42:26.496Z] Copying: 755/1024 [MB] (11 MBps) [2024-11-20T18:42:27.439Z] Copying: 767/1024 [MB] (11 MBps) [2024-11-20T18:42:28.383Z] Copying: 778/1024 [MB] (11 MBps) [2024-11-20T18:42:29.773Z] Copying: 789/1024 [MB] (11 MBps) [2024-11-20T18:42:30.719Z] Copying: 801/1024 [MB] (11 MBps) [2024-11-20T18:42:31.664Z] Copying: 811/1024 [MB] (10 MBps) [2024-11-20T18:42:32.609Z] Copying: 822/1024 [MB] (10 MBps) [2024-11-20T18:42:33.554Z] Copying: 833/1024 [MB] (10 MBps) [2024-11-20T18:42:34.499Z] Copying: 844/1024 [MB] (11 MBps) [2024-11-20T18:42:35.444Z] Copying: 855/1024 [MB] (11 MBps) [2024-11-20T18:42:36.388Z] Copying: 865/1024 [MB] (10 MBps) [2024-11-20T18:42:37.773Z] Copying: 876/1024 [MB] (11 MBps) [2024-11-20T18:42:38.718Z] Copying: 888/1024 [MB] (11 MBps) [2024-11-20T18:42:39.661Z] Copying: 898/1024 [MB] (10 MBps) [2024-11-20T18:42:40.602Z] Copying: 910/1024 [MB] (11 MBps) [2024-11-20T18:42:41.546Z] Copying: 921/1024 [MB] (11 MBps) [2024-11-20T18:42:42.584Z] Copying: 933/1024 [MB] (11 MBps) [2024-11-20T18:42:43.526Z] Copying: 944/1024 [MB] (11 MBps) [2024-11-20T18:42:44.469Z] Copying: 956/1024 [MB] (11 MBps) [2024-11-20T18:42:45.414Z] Copying: 968/1024 [MB] (12 MBps) [2024-11-20T18:42:46.800Z] Copying: 979/1024 [MB] (11 MBps) [2024-11-20T18:42:47.742Z] Copying: 991/1024 [MB] (11 MBps) [2024-11-20T18:42:48.686Z] Copying: 1002/1024 [MB] (11 MBps) [2024-11-20T18:42:49.632Z] Copying: 1013/1024 [MB] (11 MBps) [2024-11-20T18:42:49.632Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-11-20 18:42:49.298914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.003 [2024-11-20 18:42:49.298961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:31.003 [2024-11-20 18:42:49.298973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:31.003 [2024-11-20 18:42:49.298980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.003 [2024-11-20 18:42:49.298997] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:31.003 [2024-11-20 18:42:49.301359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.003 [2024-11-20 18:42:49.301462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:31.003 [2024-11-20 18:42:49.301513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.348 ms 00:33:31.003 [2024-11-20 18:42:49.301531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.003 [2024-11-20 18:42:49.303340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.003 [2024-11-20 18:42:49.303432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:31.003 [2024-11-20 18:42:49.303479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.776 ms 00:33:31.003 [2024-11-20 18:42:49.303498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.003 [2024-11-20 18:42:49.303530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.003 [2024-11-20 18:42:49.303546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:31.003 [2024-11-20 18:42:49.303555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:31.003 [2024-11-20 18:42:49.303561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.003 [2024-11-20 18:42:49.303602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:33:31.003 [2024-11-20 18:42:49.303611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:31.003 [2024-11-20 18:42:49.303617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:33:31.003 [2024-11-20 18:42:49.303623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.003 [2024-11-20 18:42:49.303634] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:31.003 [2024-11-20 18:42:49.303644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303768] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 
18:42:49.303921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:31.003 [2024-11-20 18:42:49.303927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.303932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.303938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.303943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.303949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.303955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.303962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.303968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.303975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.303981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.303986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.303991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.303997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.304002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.304008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.304013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.304019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.304024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.304030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.304036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.304042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.304047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.304052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.304058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 
00:33:31.004 [2024-11-20 18:42:49.304063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.304069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.304075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.304080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.304086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.304092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.304108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.304114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.304119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.304125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.304131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.304137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.304143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.304148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.304154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.304161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.304167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.304172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.304178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.304183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.304189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.304195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.304201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.304207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.304213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 
wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.304219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.304224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.304230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.304235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:31.004 [2024-11-20 18:42:49.304258] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:31.004 [2024-11-20 18:42:49.304264] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ceec07e5-4dff-4c79-a04f-9021d06f00b5 00:33:31.004 [2024-11-20 18:42:49.304270] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:33:31.004 [2024-11-20 18:42:49.304276] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:33:31.004 [2024-11-20 18:42:49.304281] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:33:31.004 [2024-11-20 18:42:49.304288] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:33:31.004 [2024-11-20 18:42:49.304299] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:31.004 [2024-11-20 18:42:49.304305] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:31.004 [2024-11-20 18:42:49.304311] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:31.004 [2024-11-20 18:42:49.304316] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:31.004 [2024-11-20 18:42:49.304321] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:31.004 [2024-11-20 18:42:49.304326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.004 [2024-11-20 18:42:49.304332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:31.004 [2024-11-20 18:42:49.304338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.693 ms 00:33:31.004 [2024-11-20 18:42:49.304343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.004 [2024-11-20 18:42:49.314466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.004 [2024-11-20 18:42:49.314560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:31.004 [2024-11-20 18:42:49.314576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.112 ms 00:33:31.004 [2024-11-20 18:42:49.314582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.004 [2024-11-20 18:42:49.314869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.004 [2024-11-20 18:42:49.314881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:31.004 [2024-11-20 18:42:49.314889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:33:31.004 [2024-11-20 18:42:49.314895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.004 [2024-11-20 18:42:49.342511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:31.004 [2024-11-20 18:42:49.342543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:31.004 [2024-11-20 18:42:49.342551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:33:31.004 [2024-11-20 18:42:49.342558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.004 [2024-11-20 18:42:49.342601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:31.004 [2024-11-20 18:42:49.342608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:31.004 [2024-11-20 18:42:49.342614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:31.004 [2024-11-20 18:42:49.342620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.004 [2024-11-20 18:42:49.342655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:31.004 [2024-11-20 18:42:49.342662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:31.004 [2024-11-20 18:42:49.342671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:31.004 [2024-11-20 18:42:49.342677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.004 [2024-11-20 18:42:49.342689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:31.004 [2024-11-20 18:42:49.342695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:31.004 [2024-11-20 18:42:49.342701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:31.004 [2024-11-20 18:42:49.342711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.004 [2024-11-20 18:42:49.406384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:31.004 [2024-11-20 18:42:49.406417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:31.004 [2024-11-20 18:42:49.406430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:31.004 [2024-11-20 18:42:49.406437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.004 [2024-11-20 18:42:49.456946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:31.004 [2024-11-20 18:42:49.456984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:31.004 [2024-11-20 18:42:49.456996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:31.004 [2024-11-20 18:42:49.457004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.005 [2024-11-20 18:42:49.457075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:31.005 [2024-11-20 18:42:49.457083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:31.005 [2024-11-20 18:42:49.457090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:31.005 [2024-11-20 18:42:49.457111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.005 [2024-11-20 18:42:49.457144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:31.005 [2024-11-20 18:42:49.457152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:31.005 [2024-11-20 18:42:49.457159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:31.005 [2024-11-20 18:42:49.457165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.005 [2024-11-20 18:42:49.457229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:31.005 [2024-11-20 18:42:49.457238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:31.005 [2024-11-20 
18:42:49.457244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:31.005 [2024-11-20 18:42:49.457251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.005 [2024-11-20 18:42:49.457280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:31.005 [2024-11-20 18:42:49.457307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:31.005 [2024-11-20 18:42:49.457315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:31.005 [2024-11-20 18:42:49.457321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.005 [2024-11-20 18:42:49.457354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:31.005 [2024-11-20 18:42:49.457360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:31.005 [2024-11-20 18:42:49.457367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:31.005 [2024-11-20 18:42:49.457374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.005 [2024-11-20 18:42:49.457412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:31.005 [2024-11-20 18:42:49.457421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:31.005 [2024-11-20 18:42:49.457428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:31.005 [2024-11-20 18:42:49.457434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.005 [2024-11-20 18:42:49.457540] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 158.592 ms, result 0 00:33:31.576 00:33:31.576 00:33:31.837 18:42:50 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:33:31.837 [2024-11-20 18:42:50.276897] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
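With the 'FTL fast shutdown' above completing in 158.592 ms, restore.sh moves to the verification half of the test: spdk_dd reads the 262144 blocks back out of ftl0 (--ib names the input bdev) into the test file so its checksum can be compared against the md5sum taken before the write. A hedged consolidation — the spdk_dd invocation is as traced, while the checksum-file handling is an assumption, since redirections do not show in the xtrace:

# Read-back phase: pull 262144 blocks out of the restored FTL bdev.
SPDK=/home/vagrant/spdk_repo/spdk
TESTFILE=$SPDK/test/ftl/testfile

"$SPDK/build/bin/spdk_dd" --ib=ftl0 --of="$TESTFILE" \
  --json="$SPDK/test/ftl/config/ftl.json" --count=262144

# Hypothetical verification step; the md5 file name is assumed.
md5sum -c testfile.md5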
00:33:31.837 [2024-11-20 18:42:50.277052] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85172 ] 00:33:31.837 [2024-11-20 18:42:50.441158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.098 [2024-11-20 18:42:50.574880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:32.359 [2024-11-20 18:42:50.901607] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:32.359 [2024-11-20 18:42:50.901699] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:32.623 [2024-11-20 18:42:51.065486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.623 [2024-11-20 18:42:51.065547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:32.623 [2024-11-20 18:42:51.065572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:33:32.623 [2024-11-20 18:42:51.065582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.623 [2024-11-20 18:42:51.065640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.623 [2024-11-20 18:42:51.065652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:32.623 [2024-11-20 18:42:51.065665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:33:32.623 [2024-11-20 18:42:51.065674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.623 [2024-11-20 18:42:51.065697] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:32.623 [2024-11-20 18:42:51.066486] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:32.623 [2024-11-20 18:42:51.066510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.623 [2024-11-20 18:42:51.066520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:32.623 [2024-11-20 18:42:51.066530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.820 ms 00:33:32.623 [2024-11-20 18:42:51.066541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.623 [2024-11-20 18:42:51.066842] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:33:32.623 [2024-11-20 18:42:51.066872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.623 [2024-11-20 18:42:51.066883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:32.623 [2024-11-20 18:42:51.066899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:33:32.623 [2024-11-20 18:42:51.066910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.623 [2024-11-20 18:42:51.066973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.623 [2024-11-20 18:42:51.066985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:32.623 [2024-11-20 18:42:51.066993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:33:32.623 [2024-11-20 18:42:51.067001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.623 [2024-11-20 18:42:51.067345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:33:32.623 [2024-11-20 18:42:51.067362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:32.623 [2024-11-20 18:42:51.067373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:33:32.623 [2024-11-20 18:42:51.067382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.623 [2024-11-20 18:42:51.067454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.623 [2024-11-20 18:42:51.067466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:32.623 [2024-11-20 18:42:51.067474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:33:32.623 [2024-11-20 18:42:51.067483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.623 [2024-11-20 18:42:51.067509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.623 [2024-11-20 18:42:51.067518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:32.623 [2024-11-20 18:42:51.067527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:33:32.623 [2024-11-20 18:42:51.067538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.623 [2024-11-20 18:42:51.067560] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:32.623 [2024-11-20 18:42:51.072439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.623 [2024-11-20 18:42:51.072739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:32.623 [2024-11-20 18:42:51.072761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.884 ms 00:33:32.623 [2024-11-20 18:42:51.072771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.623 [2024-11-20 18:42:51.072811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.623 [2024-11-20 18:42:51.072820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:32.623 [2024-11-20 18:42:51.072829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:33:32.623 [2024-11-20 18:42:51.072837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.623 [2024-11-20 18:42:51.072900] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:32.623 [2024-11-20 18:42:51.072933] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:32.623 [2024-11-20 18:42:51.072978] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:32.623 [2024-11-20 18:42:51.072997] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:32.623 [2024-11-20 18:42:51.073127] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:32.623 [2024-11-20 18:42:51.073143] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:32.623 [2024-11-20 18:42:51.073156] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:32.623 [2024-11-20 18:42:51.073168] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:32.623 [2024-11-20 18:42:51.073178] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:32.623 [2024-11-20 18:42:51.073187] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:32.623 [2024-11-20 18:42:51.073200] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:32.623 [2024-11-20 18:42:51.073210] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:32.623 [2024-11-20 18:42:51.073219] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:32.623 [2024-11-20 18:42:51.073228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.623 [2024-11-20 18:42:51.073236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:32.623 [2024-11-20 18:42:51.073244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:33:32.623 [2024-11-20 18:42:51.073252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.623 [2024-11-20 18:42:51.073339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.623 [2024-11-20 18:42:51.073349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:32.623 [2024-11-20 18:42:51.073357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:33:32.623 [2024-11-20 18:42:51.073368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.624 [2024-11-20 18:42:51.073474] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:32.624 [2024-11-20 18:42:51.073486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:32.624 [2024-11-20 18:42:51.073495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:32.624 [2024-11-20 18:42:51.073504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:32.624 [2024-11-20 18:42:51.073515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:32.624 [2024-11-20 18:42:51.073523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:32.624 [2024-11-20 18:42:51.073532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:32.624 [2024-11-20 18:42:51.073540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:32.624 [2024-11-20 18:42:51.073549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:32.624 [2024-11-20 18:42:51.073556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:32.624 [2024-11-20 18:42:51.073563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:32.624 [2024-11-20 18:42:51.073573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:32.624 [2024-11-20 18:42:51.073581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:32.624 [2024-11-20 18:42:51.073590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:32.624 [2024-11-20 18:42:51.073598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:32.624 [2024-11-20 18:42:51.073606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:32.624 [2024-11-20 18:42:51.073613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:32.624 [2024-11-20 18:42:51.073626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:32.624 [2024-11-20 18:42:51.073633] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:32.624 [2024-11-20 18:42:51.073641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:32.624 [2024-11-20 18:42:51.073648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:32.624 [2024-11-20 18:42:51.073655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:32.624 [2024-11-20 18:42:51.073661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:32.624 [2024-11-20 18:42:51.073668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:32.624 [2024-11-20 18:42:51.073675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:32.624 [2024-11-20 18:42:51.073681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:32.624 [2024-11-20 18:42:51.073688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:32.624 [2024-11-20 18:42:51.073694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:32.624 [2024-11-20 18:42:51.073701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:32.624 [2024-11-20 18:42:51.073707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:32.624 [2024-11-20 18:42:51.073714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:32.624 [2024-11-20 18:42:51.073722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:32.624 [2024-11-20 18:42:51.073729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:32.624 [2024-11-20 18:42:51.073735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:32.624 [2024-11-20 18:42:51.073756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:32.624 [2024-11-20 18:42:51.073763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:32.624 [2024-11-20 18:42:51.073770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:32.624 [2024-11-20 18:42:51.073777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:32.624 [2024-11-20 18:42:51.073784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:32.624 [2024-11-20 18:42:51.073791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:32.624 [2024-11-20 18:42:51.073797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:32.624 [2024-11-20 18:42:51.073804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:32.624 [2024-11-20 18:42:51.073811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:32.624 [2024-11-20 18:42:51.073823] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:32.624 [2024-11-20 18:42:51.073833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:32.624 [2024-11-20 18:42:51.073841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:32.624 [2024-11-20 18:42:51.073850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:32.624 [2024-11-20 18:42:51.073859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:32.624 [2024-11-20 18:42:51.073867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:32.624 [2024-11-20 18:42:51.073874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:32.624 
[2024-11-20 18:42:51.073881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:32.624 [2024-11-20 18:42:51.073888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:32.624 [2024-11-20 18:42:51.073895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:32.624 [2024-11-20 18:42:51.073904] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:32.624 [2024-11-20 18:42:51.073919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:32.624 [2024-11-20 18:42:51.073927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:32.624 [2024-11-20 18:42:51.073935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:32.624 [2024-11-20 18:42:51.073942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:32.624 [2024-11-20 18:42:51.073949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:32.624 [2024-11-20 18:42:51.073956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:32.624 [2024-11-20 18:42:51.073963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:32.624 [2024-11-20 18:42:51.073970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:32.624 [2024-11-20 18:42:51.073978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:32.624 [2024-11-20 18:42:51.073987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:32.624 [2024-11-20 18:42:51.073994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:32.624 [2024-11-20 18:42:51.074001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:32.624 [2024-11-20 18:42:51.074008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:32.624 [2024-11-20 18:42:51.074015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:32.624 [2024-11-20 18:42:51.074023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:32.624 [2024-11-20 18:42:51.074030] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:32.624 [2024-11-20 18:42:51.074038] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:32.624 [2024-11-20 18:42:51.074049] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:33:32.624 [2024-11-20 18:42:51.074057] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:32.624 [2024-11-20 18:42:51.074064] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:32.624 [2024-11-20 18:42:51.074071] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:32.624 [2024-11-20 18:42:51.074079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.624 [2024-11-20 18:42:51.074087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:32.624 [2024-11-20 18:42:51.074112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.675 ms 00:33:32.624 [2024-11-20 18:42:51.074120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.624 [2024-11-20 18:42:51.105736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.624 [2024-11-20 18:42:51.105807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:32.624 [2024-11-20 18:42:51.105820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.564 ms 00:33:32.624 [2024-11-20 18:42:51.105829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.624 [2024-11-20 18:42:51.105916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.624 [2024-11-20 18:42:51.105925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:32.624 [2024-11-20 18:42:51.105934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:33:32.624 [2024-11-20 18:42:51.105947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.624 [2024-11-20 18:42:51.159500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.624 [2024-11-20 18:42:51.159556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:32.624 [2024-11-20 18:42:51.159570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.497 ms 00:33:32.624 [2024-11-20 18:42:51.159580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.624 [2024-11-20 18:42:51.159633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.624 [2024-11-20 18:42:51.159645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:32.624 [2024-11-20 18:42:51.159656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:33:32.624 [2024-11-20 18:42:51.159664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.624 [2024-11-20 18:42:51.159798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.624 [2024-11-20 18:42:51.159812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:32.624 [2024-11-20 18:42:51.159823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:33:32.624 [2024-11-20 18:42:51.159831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.625 [2024-11-20 18:42:51.159975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.625 [2024-11-20 18:42:51.159990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:32.625 [2024-11-20 18:42:51.160001] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:33:32.625 [2024-11-20 18:42:51.160011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.625 [2024-11-20 18:42:51.178029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.625 [2024-11-20 18:42:51.178074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:32.625 [2024-11-20 18:42:51.178087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.998 ms 00:33:32.625 [2024-11-20 18:42:51.178116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.625 [2024-11-20 18:42:51.178257] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:32.625 [2024-11-20 18:42:51.178274] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:32.625 [2024-11-20 18:42:51.178286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.625 [2024-11-20 18:42:51.178298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:32.625 [2024-11-20 18:42:51.178309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:33:32.625 [2024-11-20 18:42:51.178318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.625 [2024-11-20 18:42:51.190625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.625 [2024-11-20 18:42:51.190667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:32.625 [2024-11-20 18:42:51.190680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.291 ms 00:33:32.625 [2024-11-20 18:42:51.190688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.625 [2024-11-20 18:42:51.190825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.625 [2024-11-20 18:42:51.190834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:32.625 [2024-11-20 18:42:51.190843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:33:32.625 [2024-11-20 18:42:51.190857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.625 [2024-11-20 18:42:51.190906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.625 [2024-11-20 18:42:51.190917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:32.625 [2024-11-20 18:42:51.190925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:33:32.625 [2024-11-20 18:42:51.190934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.625 [2024-11-20 18:42:51.191595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.625 [2024-11-20 18:42:51.191612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:32.625 [2024-11-20 18:42:51.191622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.612 ms 00:33:32.625 [2024-11-20 18:42:51.191630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.625 [2024-11-20 18:42:51.191648] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:33:32.625 [2024-11-20 18:42:51.191667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.625 [2024-11-20 18:42:51.191675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:33:32.625 [2024-11-20 18:42:51.191684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:33:32.625 [2024-11-20 18:42:51.191691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.625 [2024-11-20 18:42:51.205425] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:32.625 [2024-11-20 18:42:51.205588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.625 [2024-11-20 18:42:51.205600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:32.625 [2024-11-20 18:42:51.205611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.878 ms 00:33:32.625 [2024-11-20 18:42:51.205620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.625 [2024-11-20 18:42:51.207793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.625 [2024-11-20 18:42:51.207832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:32.625 [2024-11-20 18:42:51.207843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.147 ms 00:33:32.625 [2024-11-20 18:42:51.207853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.625 [2024-11-20 18:42:51.207955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.625 [2024-11-20 18:42:51.207966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:32.625 [2024-11-20 18:42:51.207976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:33:32.625 [2024-11-20 18:42:51.207985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.625 [2024-11-20 18:42:51.208012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.625 [2024-11-20 18:42:51.208023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:32.625 [2024-11-20 18:42:51.208038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:32.625 [2024-11-20 18:42:51.208046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.625 [2024-11-20 18:42:51.208083] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:32.625 [2024-11-20 18:42:51.208114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.625 [2024-11-20 18:42:51.208125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:32.625 [2024-11-20 18:42:51.208134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:33:32.625 [2024-11-20 18:42:51.208142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.625 [2024-11-20 18:42:51.235601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.625 [2024-11-20 18:42:51.235657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:32.625 [2024-11-20 18:42:51.235671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.437 ms 00:33:32.625 [2024-11-20 18:42:51.235680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.625 [2024-11-20 18:42:51.235771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.625 [2024-11-20 18:42:51.235782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:32.625 [2024-11-20 18:42:51.235792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.043 ms 00:33:32.625 [2024-11-20 18:42:51.235800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.625 [2024-11-20 18:42:51.237216] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 171.172 ms, result 0 00:33:34.015  [2024-11-20T18:42:53.587Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-20T18:42:54.531Z] Copying: 22/1024 [MB] (12 MBps) [2024-11-20T18:42:55.475Z] Copying: 34/1024 [MB] (11 MBps) [2024-11-20T18:42:56.862Z] Copying: 46/1024 [MB] (11 MBps) [2024-11-20T18:42:57.806Z] Copying: 57/1024 [MB] (11 MBps) [2024-11-20T18:42:58.748Z] Copying: 74/1024 [MB] (16 MBps) [2024-11-20T18:42:59.692Z] Copying: 86/1024 [MB] (11 MBps) [2024-11-20T18:43:00.634Z] Copying: 98/1024 [MB] (11 MBps) [2024-11-20T18:43:01.577Z] Copying: 109/1024 [MB] (11 MBps) [2024-11-20T18:43:02.521Z] Copying: 121/1024 [MB] (11 MBps) [2024-11-20T18:43:03.463Z] Copying: 133/1024 [MB] (11 MBps) [2024-11-20T18:43:04.851Z] Copying: 145/1024 [MB] (11 MBps) [2024-11-20T18:43:05.797Z] Copying: 157/1024 [MB] (11 MBps) [2024-11-20T18:43:06.741Z] Copying: 167/1024 [MB] (10 MBps) [2024-11-20T18:43:07.685Z] Copying: 178/1024 [MB] (11 MBps) [2024-11-20T18:43:08.629Z] Copying: 191/1024 [MB] (12 MBps) [2024-11-20T18:43:09.575Z] Copying: 202/1024 [MB] (11 MBps) [2024-11-20T18:43:10.520Z] Copying: 213/1024 [MB] (11 MBps) [2024-11-20T18:43:11.465Z] Copying: 224/1024 [MB] (11 MBps) [2024-11-20T18:43:12.854Z] Copying: 237/1024 [MB] (12 MBps) [2024-11-20T18:43:13.800Z] Copying: 248/1024 [MB] (11 MBps) [2024-11-20T18:43:14.744Z] Copying: 260/1024 [MB] (11 MBps) [2024-11-20T18:43:15.690Z] Copying: 272/1024 [MB] (11 MBps) [2024-11-20T18:43:16.714Z] Copying: 283/1024 [MB] (11 MBps) [2024-11-20T18:43:17.675Z] Copying: 294/1024 [MB] (11 MBps) [2024-11-20T18:43:18.620Z] Copying: 306/1024 [MB] (11 MBps) [2024-11-20T18:43:19.561Z] Copying: 317/1024 [MB] (10 MBps) [2024-11-20T18:43:20.504Z] Copying: 329/1024 [MB] (11 MBps) [2024-11-20T18:43:21.447Z] Copying: 340/1024 [MB] (11 MBps) [2024-11-20T18:43:22.835Z] Copying: 352/1024 [MB] (11 MBps) [2024-11-20T18:43:23.780Z] Copying: 364/1024 [MB] (12 MBps) [2024-11-20T18:43:24.724Z] Copying: 376/1024 [MB] (11 MBps) [2024-11-20T18:43:25.669Z] Copying: 388/1024 [MB] (11 MBps) [2024-11-20T18:43:26.615Z] Copying: 399/1024 [MB] (11 MBps) [2024-11-20T18:43:27.610Z] Copying: 411/1024 [MB] (11 MBps) [2024-11-20T18:43:28.555Z] Copying: 422/1024 [MB] (10 MBps) [2024-11-20T18:43:29.497Z] Copying: 433/1024 [MB] (11 MBps) [2024-11-20T18:43:30.441Z] Copying: 444/1024 [MB] (11 MBps) [2024-11-20T18:43:31.828Z] Copying: 456/1024 [MB] (11 MBps) [2024-11-20T18:43:32.774Z] Copying: 468/1024 [MB] (11 MBps) [2024-11-20T18:43:33.717Z] Copying: 481/1024 [MB] (12 MBps) [2024-11-20T18:43:34.662Z] Copying: 492/1024 [MB] (11 MBps) [2024-11-20T18:43:35.607Z] Copying: 510/1024 [MB] (17 MBps) [2024-11-20T18:43:36.553Z] Copying: 521/1024 [MB] (11 MBps) [2024-11-20T18:43:37.495Z] Copying: 532/1024 [MB] (11 MBps) [2024-11-20T18:43:38.438Z] Copying: 544/1024 [MB] (11 MBps) [2024-11-20T18:43:39.834Z] Copying: 555/1024 [MB] (11 MBps) [2024-11-20T18:43:40.778Z] Copying: 566/1024 [MB] (11 MBps) [2024-11-20T18:43:41.724Z] Copying: 578/1024 [MB] (11 MBps) [2024-11-20T18:43:42.669Z] Copying: 590/1024 [MB] (11 MBps) [2024-11-20T18:43:43.614Z] Copying: 602/1024 [MB] (11 MBps) [2024-11-20T18:43:44.560Z] Copying: 614/1024 [MB] (11 MBps) [2024-11-20T18:43:45.506Z] Copying: 626/1024 [MB] (11 MBps) [2024-11-20T18:43:46.453Z] Copying: 637/1024 [MB] 
(11 MBps) [2024-11-20T18:43:47.842Z] Copying: 648/1024 [MB] (10 MBps) [2024-11-20T18:43:48.788Z] Copying: 659/1024 [MB] (11 MBps) [2024-11-20T18:43:49.733Z] Copying: 669/1024 [MB] (10 MBps) [2024-11-20T18:43:50.677Z] Copying: 680/1024 [MB] (10 MBps) [2024-11-20T18:43:51.704Z] Copying: 690/1024 [MB] (10 MBps) [2024-11-20T18:43:52.649Z] Copying: 701/1024 [MB] (10 MBps) [2024-11-20T18:43:53.592Z] Copying: 712/1024 [MB] (11 MBps) [2024-11-20T18:43:54.535Z] Copying: 723/1024 [MB] (11 MBps) [2024-11-20T18:43:55.480Z] Copying: 735/1024 [MB] (11 MBps) [2024-11-20T18:43:56.867Z] Copying: 746/1024 [MB] (11 MBps) [2024-11-20T18:43:57.441Z] Copying: 757/1024 [MB] (10 MBps) [2024-11-20T18:43:58.827Z] Copying: 769/1024 [MB] (11 MBps) [2024-11-20T18:43:59.792Z] Copying: 780/1024 [MB] (11 MBps) [2024-11-20T18:44:00.734Z] Copying: 790/1024 [MB] (10 MBps) [2024-11-20T18:44:01.677Z] Copying: 801/1024 [MB] (10 MBps) [2024-11-20T18:44:02.620Z] Copying: 813/1024 [MB] (11 MBps) [2024-11-20T18:44:03.563Z] Copying: 824/1024 [MB] (11 MBps) [2024-11-20T18:44:04.507Z] Copying: 836/1024 [MB] (11 MBps) [2024-11-20T18:44:05.452Z] Copying: 848/1024 [MB] (11 MBps) [2024-11-20T18:44:06.838Z] Copying: 859/1024 [MB] (11 MBps) [2024-11-20T18:44:07.782Z] Copying: 871/1024 [MB] (11 MBps) [2024-11-20T18:44:08.725Z] Copying: 882/1024 [MB] (10 MBps) [2024-11-20T18:44:09.669Z] Copying: 893/1024 [MB] (11 MBps) [2024-11-20T18:44:10.613Z] Copying: 905/1024 [MB] (11 MBps) [2024-11-20T18:44:11.557Z] Copying: 916/1024 [MB] (11 MBps) [2024-11-20T18:44:12.501Z] Copying: 928/1024 [MB] (11 MBps) [2024-11-20T18:44:13.446Z] Copying: 939/1024 [MB] (11 MBps) [2024-11-20T18:44:14.832Z] Copying: 950/1024 [MB] (11 MBps) [2024-11-20T18:44:15.773Z] Copying: 962/1024 [MB] (11 MBps) [2024-11-20T18:44:16.715Z] Copying: 973/1024 [MB] (11 MBps) [2024-11-20T18:44:17.656Z] Copying: 984/1024 [MB] (11 MBps) [2024-11-20T18:44:18.600Z] Copying: 996/1024 [MB] (11 MBps) [2024-11-20T18:44:19.545Z] Copying: 1007/1024 [MB] (11 MBps) [2024-11-20T18:44:20.117Z] Copying: 1018/1024 [MB] (11 MBps) [2024-11-20T18:44:20.117Z] Copying: 1024/1024 [MB] (average 11 MBps)[2024-11-20 18:44:20.045846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:01.488 [2024-11-20 18:44:20.045929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:01.488 [2024-11-20 18:44:20.045956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:01.488 [2024-11-20 18:44:20.046334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:01.488 [2024-11-20 18:44:20.046362] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:01.488 [2024-11-20 18:44:20.050103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:01.488 [2024-11-20 18:44:20.050138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:01.488 [2024-11-20 18:44:20.050151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.722 ms 00:35:01.488 [2024-11-20 18:44:20.050161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:01.488 [2024-11-20 18:44:20.050432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:01.488 [2024-11-20 18:44:20.050445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:01.488 [2024-11-20 18:44:20.050457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:35:01.488 [2024-11-20 18:44:20.050467] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:01.488 [2024-11-20 18:44:20.050500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:01.488 [2024-11-20 18:44:20.050515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:35:01.488 [2024-11-20 18:44:20.050525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:35:01.488 [2024-11-20 18:44:20.050535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:01.488 [2024-11-20 18:44:20.050593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:01.488 [2024-11-20 18:44:20.050604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:35:01.488 [2024-11-20 18:44:20.050615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:35:01.488 [2024-11-20 18:44:20.050624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:01.488 [2024-11-20 18:44:20.050641] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:01.488 [2024-11-20 18:44:20.050656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:35:01.488 [2024-11-20 18:44:20.050667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:35:01.488 [2024-11-20 18:44:20.050678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:01.488 [2024-11-20 18:44:20.050687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:01.488 [2024-11-20 18:44:20.050696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:01.488 [2024-11-20 18:44:20.050705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:01.488 [2024-11-20 18:44:20.050714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:01.488 [2024-11-20 18:44:20.050723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:01.488 [2024-11-20 18:44:20.050732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:01.488 [2024-11-20 18:44:20.050741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:01.488 [2024-11-20 18:44:20.050753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:01.488 [2024-11-20 18:44:20.050763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:01.488 [2024-11-20 18:44:20.050773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:01.488 [2024-11-20 18:44:20.050782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:01.488 [2024-11-20 18:44:20.050791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:01.488 [2024-11-20 18:44:20.050800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:01.488 [2024-11-20 18:44:20.050809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:01.488 [2024-11-20 
18:44:20.050819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:01.488 [2024-11-20 18:44:20.050828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:35:01.488 [2024-11-20 18:44:20.050837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:01.488 [2024-11-20 18:44:20.050848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:01.488 [2024-11-20 18:44:20.050856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:01.488 [2024-11-20 18:44:20.050866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:01.488 [2024-11-20 18:44:20.050874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:01.488 [2024-11-20 18:44:20.050885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:01.488 [2024-11-20 18:44:20.050894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:01.488 [2024-11-20 18:44:20.050903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:35:01.488 [2024-11-20 18:44:20.050913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:35:01.488 [2024-11-20 18:44:20.050923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:01.488 [2024-11-20 18:44:20.050931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:01.488 [2024-11-20 18:44:20.050941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:01.488 [2024-11-20 18:44:20.050950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:01.488 [2024-11-20 18:44:20.050959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:01.488 [2024-11-20 18:44:20.050969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:01.488 [2024-11-20 18:44:20.050978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.050987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.050996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 
00:35:01.489 [2024-11-20 18:44:20.051051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 
wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:01.489 [2024-11-20 18:44:20.051653] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:01.489 [2024-11-20 18:44:20.051662] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ceec07e5-4dff-4c79-a04f-9021d06f00b5 00:35:01.489 [2024-11-20 18:44:20.051675] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:35:01.489 [2024-11-20 18:44:20.051684] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:35:01.489 [2024-11-20 18:44:20.051693] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:35:01.489 [2024-11-20 18:44:20.051702] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:35:01.489 [2024-11-20 18:44:20.051711] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:01.489 [2024-11-20 18:44:20.051721] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:01.489 [2024-11-20 18:44:20.051730] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:01.489 [2024-11-20 18:44:20.051738] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:01.489 [2024-11-20 18:44:20.051746] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:01.489 [2024-11-20 18:44:20.051754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:01.489 [2024-11-20 18:44:20.051764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:35:01.489 [2024-11-20 18:44:20.051774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.114 ms 00:35:01.489 [2024-11-20 18:44:20.051783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:01.489 [2024-11-20 18:44:20.064879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:01.489 [2024-11-20 18:44:20.064908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:35:01.489 [2024-11-20 18:44:20.064918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.077 ms 00:35:01.489 [2024-11-20 18:44:20.064925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:01.490 [2024-11-20 18:44:20.065235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:01.490 [2024-11-20 18:44:20.065248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize P2L checkpointing 00:35:01.490 [2024-11-20 18:44:20.065256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:35:01.490 [2024-11-20 18:44:20.065265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:01.490 [2024-11-20 18:44:20.093102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:01.490 [2024-11-20 18:44:20.093130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:01.490 [2024-11-20 18:44:20.093138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:01.490 [2024-11-20 18:44:20.093145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:01.490 [2024-11-20 18:44:20.093199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:01.490 [2024-11-20 18:44:20.093206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:01.490 [2024-11-20 18:44:20.093213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:01.490 [2024-11-20 18:44:20.093223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:01.490 [2024-11-20 18:44:20.093265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:01.490 [2024-11-20 18:44:20.093273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:01.490 [2024-11-20 18:44:20.093279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:01.490 [2024-11-20 18:44:20.093286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:01.490 [2024-11-20 18:44:20.093299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:01.490 [2024-11-20 18:44:20.093305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:01.490 [2024-11-20 18:44:20.093312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:01.490 [2024-11-20 18:44:20.093319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:01.749 [2024-11-20 18:44:20.156316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:01.749 [2024-11-20 18:44:20.156352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:01.749 [2024-11-20 18:44:20.156361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:01.749 [2024-11-20 18:44:20.156368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:01.750 [2024-11-20 18:44:20.207469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:01.750 [2024-11-20 18:44:20.207667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:01.750 [2024-11-20 18:44:20.207682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:01.750 [2024-11-20 18:44:20.207689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:01.750 [2024-11-20 18:44:20.207766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:01.750 [2024-11-20 18:44:20.207774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:01.750 [2024-11-20 18:44:20.207781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:01.750 [2024-11-20 18:44:20.207788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:01.750 [2024-11-20 18:44:20.207819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:01.750 
[2024-11-20 18:44:20.207827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:01.750 [2024-11-20 18:44:20.207834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:01.750 [2024-11-20 18:44:20.207841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:01.750 [2024-11-20 18:44:20.207905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:01.750 [2024-11-20 18:44:20.207913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:01.750 [2024-11-20 18:44:20.207920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:01.750 [2024-11-20 18:44:20.207926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:01.750 [2024-11-20 18:44:20.207946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:01.750 [2024-11-20 18:44:20.207953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:35:01.750 [2024-11-20 18:44:20.207961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:01.750 [2024-11-20 18:44:20.207967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:01.750 [2024-11-20 18:44:20.208002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:01.750 [2024-11-20 18:44:20.208011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:01.750 [2024-11-20 18:44:20.208017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:01.750 [2024-11-20 18:44:20.208025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:01.750 [2024-11-20 18:44:20.208061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:01.750 [2024-11-20 18:44:20.208069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:01.750 [2024-11-20 18:44:20.208075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:01.750 [2024-11-20 18:44:20.208082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:01.750 [2024-11-20 18:44:20.208205] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 162.337 ms, result 0 00:35:02.321 00:35:02.321 00:35:02.321 18:44:20 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:35:04.868 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:35:04.868 18:44:22 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:35:04.868 [2024-11-20 18:44:23.000073] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:35:04.868 [2024-11-20 18:44:23.000203] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86102 ] 00:35:04.868 [2024-11-20 18:44:23.153674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:04.868 [2024-11-20 18:44:23.241645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:04.868 [2024-11-20 18:44:23.468832] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:04.868 [2024-11-20 18:44:23.468884] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:05.130 [2024-11-20 18:44:23.624138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.130 [2024-11-20 18:44:23.624175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:35:05.130 [2024-11-20 18:44:23.624191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:05.130 [2024-11-20 18:44:23.624197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.130 [2024-11-20 18:44:23.624237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.130 [2024-11-20 18:44:23.624245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:05.131 [2024-11-20 18:44:23.624253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:35:05.131 [2024-11-20 18:44:23.624259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.131 [2024-11-20 18:44:23.624272] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:35:05.131 [2024-11-20 18:44:23.624787] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:35:05.131 [2024-11-20 18:44:23.624800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.131 [2024-11-20 18:44:23.624807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:05.131 [2024-11-20 18:44:23.624813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:35:05.131 [2024-11-20 18:44:23.624819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.131 [2024-11-20 18:44:23.625016] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:35:05.131 [2024-11-20 18:44:23.625035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.131 [2024-11-20 18:44:23.625042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:35:05.131 [2024-11-20 18:44:23.625051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:35:05.131 [2024-11-20 18:44:23.625057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.131 [2024-11-20 18:44:23.625130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.131 [2024-11-20 18:44:23.625140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:35:05.131 [2024-11-20 18:44:23.625146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:35:05.131 [2024-11-20 18:44:23.625152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.131 [2024-11-20 18:44:23.625357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
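Each FTL management step below is traced as an Action (or Rollback) marker followed by name:, duration: and status: records from trace_step. A quick way to rank step durations from a saved copy of this console output (a sketch; 'build.log' is a stand-in name, and it assumes the one-record-per-line form the application originally prints):

  # pair each 'name:' record with the 'duration:' that follows it and
  # print the slowest FTL management steps first
  grep 'trace_step' build.log | sed -E 's/.*\[ftl0\] //' |
  awk -F': ' '/^name/ {step = $2} /^duration/ {print $2 "\t" step}' |
  sort -rn | head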
00:35:05.131 [2024-11-20 18:44:23.625369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:05.131 [2024-11-20 18:44:23.625375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:35:05.131 [2024-11-20 18:44:23.625381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.131 [2024-11-20 18:44:23.625447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.131 [2024-11-20 18:44:23.625456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:05.131 [2024-11-20 18:44:23.625462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:35:05.131 [2024-11-20 18:44:23.625467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.131 [2024-11-20 18:44:23.625485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.131 [2024-11-20 18:44:23.625491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:35:05.131 [2024-11-20 18:44:23.625498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:35:05.131 [2024-11-20 18:44:23.625506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.131 [2024-11-20 18:44:23.625519] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:35:05.131 [2024-11-20 18:44:23.628732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.131 [2024-11-20 18:44:23.628758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:05.131 [2024-11-20 18:44:23.628766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.216 ms 00:35:05.131 [2024-11-20 18:44:23.628771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.131 [2024-11-20 18:44:23.628799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.131 [2024-11-20 18:44:23.628806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:35:05.131 [2024-11-20 18:44:23.628812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:35:05.131 [2024-11-20 18:44:23.628818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.131 [2024-11-20 18:44:23.628846] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:35:05.131 [2024-11-20 18:44:23.628865] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:35:05.131 [2024-11-20 18:44:23.628895] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:35:05.131 [2024-11-20 18:44:23.628907] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:35:05.131 [2024-11-20 18:44:23.628990] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:35:05.131 [2024-11-20 18:44:23.628999] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:35:05.131 [2024-11-20 18:44:23.629007] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:35:05.131 [2024-11-20 18:44:23.629015] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:35:05.131 [2024-11-20 18:44:23.629022] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:35:05.131 [2024-11-20 18:44:23.629029] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:35:05.131 [2024-11-20 18:44:23.629037] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:35:05.131 [2024-11-20 18:44:23.629043] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:35:05.131 [2024-11-20 18:44:23.629048] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:35:05.131 [2024-11-20 18:44:23.629054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.131 [2024-11-20 18:44:23.629060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:35:05.131 [2024-11-20 18:44:23.629066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:35:05.131 [2024-11-20 18:44:23.629072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.131 [2024-11-20 18:44:23.629146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.131 [2024-11-20 18:44:23.629154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:35:05.131 [2024-11-20 18:44:23.629160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:35:05.131 [2024-11-20 18:44:23.629168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.131 [2024-11-20 18:44:23.629243] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:35:05.131 [2024-11-20 18:44:23.629252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:35:05.131 [2024-11-20 18:44:23.629259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:05.131 [2024-11-20 18:44:23.629265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:05.131 [2024-11-20 18:44:23.629272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:35:05.131 [2024-11-20 18:44:23.629277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:35:05.131 [2024-11-20 18:44:23.629282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:35:05.131 [2024-11-20 18:44:23.629290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:35:05.131 [2024-11-20 18:44:23.629296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:35:05.131 [2024-11-20 18:44:23.629302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:05.131 [2024-11-20 18:44:23.629307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:35:05.131 [2024-11-20 18:44:23.629313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:35:05.131 [2024-11-20 18:44:23.629318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:05.131 [2024-11-20 18:44:23.629323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:35:05.131 [2024-11-20 18:44:23.629330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:35:05.131 [2024-11-20 18:44:23.629335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:05.131 [2024-11-20 18:44:23.629341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:35:05.131 [2024-11-20 18:44:23.629350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:35:05.131 [2024-11-20 18:44:23.629356] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:05.131 [2024-11-20 18:44:23.629361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:35:05.131 [2024-11-20 18:44:23.629366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:35:05.131 [2024-11-20 18:44:23.629371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:05.131 [2024-11-20 18:44:23.629376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:35:05.131 [2024-11-20 18:44:23.629381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:35:05.131 [2024-11-20 18:44:23.629386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:05.132 [2024-11-20 18:44:23.629392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:35:05.132 [2024-11-20 18:44:23.629397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:35:05.132 [2024-11-20 18:44:23.629402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:05.132 [2024-11-20 18:44:23.629415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:35:05.132 [2024-11-20 18:44:23.629421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:35:05.132 [2024-11-20 18:44:23.629426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:05.132 [2024-11-20 18:44:23.629431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:35:05.132 [2024-11-20 18:44:23.629436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:35:05.132 [2024-11-20 18:44:23.629441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:05.132 [2024-11-20 18:44:23.629447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:35:05.132 [2024-11-20 18:44:23.629452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:35:05.132 [2024-11-20 18:44:23.629457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:05.132 [2024-11-20 18:44:23.629462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:35:05.132 [2024-11-20 18:44:23.629467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:35:05.132 [2024-11-20 18:44:23.629475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:05.132 [2024-11-20 18:44:23.629480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:35:05.132 [2024-11-20 18:44:23.629486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:35:05.132 [2024-11-20 18:44:23.629492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:05.132 [2024-11-20 18:44:23.629498] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:35:05.132 [2024-11-20 18:44:23.629504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:35:05.132 [2024-11-20 18:44:23.629510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:05.132 [2024-11-20 18:44:23.629516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:05.132 [2024-11-20 18:44:23.629522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:35:05.132 [2024-11-20 18:44:23.629528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:35:05.132 [2024-11-20 18:44:23.629533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:35:05.132 
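The sizes in the layout dump above are internally consistent: the l2p region holds one address per L2P entry, so the reported 20971520 entries at 4 bytes apiece come to exactly the 80.00 MiB shown for 'Region l2p'. Shell arithmetic to check it:

  # L2P table size in MiB = entries * address size (bytes) / 2^20
  echo $(( 20971520 * 4 / 1024 / 1024 ))   # -> 80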
[2024-11-20 18:44:23.629539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:35:05.132 [2024-11-20 18:44:23.629544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:35:05.132 [2024-11-20 18:44:23.629549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:35:05.132 [2024-11-20 18:44:23.629556] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:35:05.132 [2024-11-20 18:44:23.629566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:05.132 [2024-11-20 18:44:23.629572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:35:05.132 [2024-11-20 18:44:23.629578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:35:05.132 [2024-11-20 18:44:23.629583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:35:05.132 [2024-11-20 18:44:23.629589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:35:05.132 [2024-11-20 18:44:23.629594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:35:05.132 [2024-11-20 18:44:23.629600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:35:05.132 [2024-11-20 18:44:23.629605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:35:05.132 [2024-11-20 18:44:23.629610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:35:05.132 [2024-11-20 18:44:23.629616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:35:05.132 [2024-11-20 18:44:23.629621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:35:05.132 [2024-11-20 18:44:23.629627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:35:05.132 [2024-11-20 18:44:23.629633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:35:05.132 [2024-11-20 18:44:23.629638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:35:05.132 [2024-11-20 18:44:23.629644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:35:05.132 [2024-11-20 18:44:23.629650] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:35:05.132 [2024-11-20 18:44:23.629656] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:05.132 [2024-11-20 18:44:23.629665] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:35:05.132 [2024-11-20 18:44:23.629671] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:35:05.132 [2024-11-20 18:44:23.629677] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:35:05.132 [2024-11-20 18:44:23.629684] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:35:05.132 [2024-11-20 18:44:23.629690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.132 [2024-11-20 18:44:23.629696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:35:05.132 [2024-11-20 18:44:23.629702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.499 ms 00:35:05.132 [2024-11-20 18:44:23.629708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.132 [2024-11-20 18:44:23.650570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.132 [2024-11-20 18:44:23.650596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:05.132 [2024-11-20 18:44:23.650605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.830 ms 00:35:05.132 [2024-11-20 18:44:23.650611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.132 [2024-11-20 18:44:23.650671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.132 [2024-11-20 18:44:23.650678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:35:05.132 [2024-11-20 18:44:23.650684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:35:05.132 [2024-11-20 18:44:23.650692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.132 [2024-11-20 18:44:23.690660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.132 [2024-11-20 18:44:23.690823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:05.132 [2024-11-20 18:44:23.690838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.931 ms 00:35:05.132 [2024-11-20 18:44:23.690846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.132 [2024-11-20 18:44:23.690881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.132 [2024-11-20 18:44:23.690889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:05.132 [2024-11-20 18:44:23.690896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:35:05.132 [2024-11-20 18:44:23.690902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.132 [2024-11-20 18:44:23.690977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.132 [2024-11-20 18:44:23.690986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:05.132 [2024-11-20 18:44:23.690994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:35:05.132 [2024-11-20 18:44:23.691001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.132 [2024-11-20 18:44:23.691112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.132 [2024-11-20 18:44:23.691122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:05.132 [2024-11-20 18:44:23.691128] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:35:05.132 [2024-11-20 18:44:23.691134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.132 [2024-11-20 18:44:23.702899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.132 [2024-11-20 18:44:23.702929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:05.132 [2024-11-20 18:44:23.702937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.751 ms 00:35:05.132 [2024-11-20 18:44:23.702944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.132 [2024-11-20 18:44:23.703034] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:35:05.132 [2024-11-20 18:44:23.703044] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:35:05.132 [2024-11-20 18:44:23.703052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.132 [2024-11-20 18:44:23.703058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:35:05.132 [2024-11-20 18:44:23.703067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:35:05.132 [2024-11-20 18:44:23.703073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.133 [2024-11-20 18:44:23.713027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.133 [2024-11-20 18:44:23.713060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:35:05.133 [2024-11-20 18:44:23.713069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.943 ms 00:35:05.133 [2024-11-20 18:44:23.713074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.133 [2024-11-20 18:44:23.713182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.133 [2024-11-20 18:44:23.713190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:35:05.133 [2024-11-20 18:44:23.713196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:35:05.133 [2024-11-20 18:44:23.713202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.133 [2024-11-20 18:44:23.713238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.133 [2024-11-20 18:44:23.713267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:35:05.133 [2024-11-20 18:44:23.713275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:35:05.133 [2024-11-20 18:44:23.713280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.133 [2024-11-20 18:44:23.713729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.133 [2024-11-20 18:44:23.713744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:35:05.133 [2024-11-20 18:44:23.713751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms 00:35:05.133 [2024-11-20 18:44:23.713757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.133 [2024-11-20 18:44:23.713770] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:35:05.133 [2024-11-20 18:44:23.713779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.133 [2024-11-20 18:44:23.713786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:35:05.133 [2024-11-20 18:44:23.713792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:35:05.133 [2024-11-20 18:44:23.713797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.133 [2024-11-20 18:44:23.723106] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:35:05.133 [2024-11-20 18:44:23.723209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.133 [2024-11-20 18:44:23.723218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:35:05.133 [2024-11-20 18:44:23.723225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.399 ms 00:35:05.133 [2024-11-20 18:44:23.723231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.133 [2024-11-20 18:44:23.724829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.133 [2024-11-20 18:44:23.724940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:35:05.133 [2024-11-20 18:44:23.724956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.585 ms 00:35:05.133 [2024-11-20 18:44:23.724962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.133 [2024-11-20 18:44:23.725045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.133 [2024-11-20 18:44:23.725054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:35:05.133 [2024-11-20 18:44:23.725062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:35:05.133 [2024-11-20 18:44:23.725069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.133 [2024-11-20 18:44:23.725088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.133 [2024-11-20 18:44:23.725110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:35:05.133 [2024-11-20 18:44:23.725120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:05.133 [2024-11-20 18:44:23.725126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.133 [2024-11-20 18:44:23.725153] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:35:05.133 [2024-11-20 18:44:23.725161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.133 [2024-11-20 18:44:23.725168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:35:05.133 [2024-11-20 18:44:23.725174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:35:05.133 [2024-11-20 18:44:23.725180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.133 [2024-11-20 18:44:23.744612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.133 [2024-11-20 18:44:23.744641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:35:05.133 [2024-11-20 18:44:23.744649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.416 ms 00:35:05.133 [2024-11-20 18:44:23.744656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.133 [2024-11-20 18:44:23.744716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.133 [2024-11-20 18:44:23.744724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:35:05.133 [2024-11-20 18:44:23.744730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.030 ms
00:35:05.133 [2024-11-20 18:44:23.744736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:05.133 [2024-11-20 18:44:23.745603] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 121.096 ms, result 0
00:35:06.522 [spdk_dd progress trimmed: ~90 incremental 'Copying: N/1024 [MB]' updates at 10-15 MBps, 2024-11-20T18:44:26Z through 2024-11-20T18:45:55Z; final: Copying: 1024/1024 [MB] (average 11 MBps)]
[2024-11-20 18:45:55.320346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:36.878 [2024-11-20 18:45:55.320441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:36:36.878 [2024-11-20 18:45:55.320462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:36:36.878 [2024-11-20 18:45:55.320473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:36.878 [2024-11-20 18:45:55.322734] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:36:36.878 [2024-11-20 18:45:55.328708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:36.878 [2024-11-20 18:45:55.328860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:36:36.878 [2024-11-20 18:45:55.328943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.853 ms
00:36:36.878 [2024-11-20 18:45:55.328971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:36.878 [2024-11-20 18:45:55.340275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:36.878 [2024-11-20 18:45:55.340452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:36:36.878 [2024-11-20 18:45:55.340526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.341 ms
00:36:36.878 [2024-11-20 18:45:55.340552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:36.878 [2024-11-20 18:45:55.340600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:36.878 [2024-11-20 18:45:55.340623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata
00:36:36.878 [2024-11-20 18:45:55.340646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:36:36.878 [2024-11-20 18:45:55.340666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:36.878 [2024-11-20 18:45:55.340745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:36.878 [2024-11-20 18:45:55.340940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state
00:36:36.878 [2024-11-20 18:45:55.340973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms
00:36:36.878 [2024-11-20 18:45:55.340993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:36.878 [2024-11-20 18:45:55.341024] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:36:36.878 [2024-11-20 18:45:55.341051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 126976 / 261120 wr_cnt: 1 state: open
00:36:36.878 [bands dump trimmed: Band 2 through Band 100 all report '0 / 261120 wr_cnt: 0 state: free']
00:36:36.879 [2024-11-20 18:45:55.345111] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:36:36.879 [2024-11-20 18:45:55.345122] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ceec07e5-4dff-4c79-a04f-9021d06f00b5
00:36:36.879 [2024-11-20 18:45:55.345131] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 126976
00:36:36.879 [2024-11-20 18:45:55.345141] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 127008
00:36:36.879 [2024-11-20 18:45:55.345151] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 126976
00:36:36.879 [2024-11-20 18:45:55.345174] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0003
00:36:36.879 [2024-11-20 18:45:55.345184] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:36:36.879 [2024-11-20 18:45:55.345193] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:36:36.879 [2024-11-20 18:45:55.345210] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:36:36.879 [2024-11-20 18:45:55.345217] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:36:36.879 [2024-11-20 18:45:55.345225] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:36:36.879 [2024-11-20 18:45:55.345234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:36.879 [2024-11-20 18:45:55.345246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:36:36.879 [2024-11-20 18:45:55.345255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.211 ms
00:36:36.879 [2024-11-20 18:45:55.345265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:36.879 [2024-11-20 18:45:55.360090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:36.879 [2024-11-20 18:45:55.360143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:36:36.879 [2024-11-20 18:45:55.360157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.798 ms
00:36:36.879 [2024-11-20 18:45:55.360172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
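The reported WAF: 1.0003 is the write-amplification ratio of media writes to host writes, taken from the two counters in the dump above and rounded to four decimals:

  # WAF = total writes / user writes
  awk 'BEGIN { printf "%.4f\n", 127008 / 126976 }'   # -> 1.0003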
00:36:36.879 [2024-11-20 18:45:55.360612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:36.879 [2024-11-20 18:45:55.360662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:36:36.879 [2024-11-20 18:45:55.360672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:36:36.879 [2024-11-20 18:45:55.360681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:36.879 [2024-11-20 18:45:55.400292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:36.879 [2024-11-20 18:45:55.400337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:36.879 [2024-11-20 18:45:55.400355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:36.879 [2024-11-20 18:45:55.400364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:36.879 [2024-11-20 18:45:55.400425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:36.879 [2024-11-20 18:45:55.400434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:36.879 [2024-11-20 18:45:55.400443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:36.879 [2024-11-20 18:45:55.400451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:36.879 [2024-11-20 18:45:55.400506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:36.879 [2024-11-20 18:45:55.400518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:36.879 [2024-11-20 18:45:55.400527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:36.879 [2024-11-20 18:45:55.400540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:36.879 [2024-11-20 18:45:55.400557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:36.879 [2024-11-20 18:45:55.400567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:36.879 [2024-11-20 18:45:55.400576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:36.879 [2024-11-20 18:45:55.400584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:36.879 [2024-11-20 18:45:55.491514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:36.879 [2024-11-20 18:45:55.491792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:36.879 [2024-11-20 18:45:55.491822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:36.879 [2024-11-20 18:45:55.491831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:37.141 [2024-11-20 18:45:55.566499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:37.141 [2024-11-20 18:45:55.566759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:37.141 [2024-11-20 18:45:55.566790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:37.141 [2024-11-20 18:45:55.566800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:37.141 [2024-11-20 18:45:55.566921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:37.141 [2024-11-20 18:45:55.566934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:37.141 [2024-11-20 18:45:55.566946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:37.141 [2024-11-20 
18:45:55.566955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:37.141 [2024-11-20 18:45:55.567002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:37.141 [2024-11-20 18:45:55.567012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:37.141 [2024-11-20 18:45:55.567021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:37.141 [2024-11-20 18:45:55.567029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:37.141 [2024-11-20 18:45:55.567157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:37.141 [2024-11-20 18:45:55.567170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:37.141 [2024-11-20 18:45:55.567180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:37.141 [2024-11-20 18:45:55.567189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:37.141 [2024-11-20 18:45:55.567222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:37.141 [2024-11-20 18:45:55.567233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:36:37.141 [2024-11-20 18:45:55.567243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:37.141 [2024-11-20 18:45:55.567252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:37.141 [2024-11-20 18:45:55.567302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:37.141 [2024-11-20 18:45:55.567315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:37.141 [2024-11-20 18:45:55.567324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:37.141 [2024-11-20 18:45:55.567335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:37.141 [2024-11-20 18:45:55.567399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:37.141 [2024-11-20 18:45:55.567411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:37.141 [2024-11-20 18:45:55.567421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:37.141 [2024-11-20 18:45:55.567430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:37.141 [2024-11-20 18:45:55.567591] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 248.532 ms, result 0 00:36:38.527 00:36:38.527 00:36:38.527 18:45:57 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:36:38.527 [2024-11-20 18:45:57.080796] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
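restore.sh@80 reads the just-written region back out of ftl0 for comparison: --skip=131072 mirrors the --seek used on the write side, and --count=262144 covers the whole pattern. Assuming the 4 KiB block size an FTL bdev exposes, that is a 512 MiB offset and a 1024 MiB transfer, the same 1024 MB total the write side copied:

  # offset and length of the read-back in MiB, assuming 4096-byte blocks
  echo $(( 131072 * 4096 / 1024 / 1024 ))   # -> 512   (offset)
  echo $(( 262144 * 4096 / 1024 / 1024 ))   # -> 1024  (length)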
00:36:38.527 [2024-11-20 18:45:57.080944] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87033 ] 00:36:38.788 [2024-11-20 18:45:57.243434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:38.788 [2024-11-20 18:45:57.384164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:39.362 [2024-11-20 18:45:57.713079] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:36:39.362 [2024-11-20 18:45:57.713217] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:36:39.362 [2024-11-20 18:45:57.878023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.362 [2024-11-20 18:45:57.878089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:36:39.362 [2024-11-20 18:45:57.878133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:36:39.362 [2024-11-20 18:45:57.878143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.362 [2024-11-20 18:45:57.878206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.362 [2024-11-20 18:45:57.878218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:39.362 [2024-11-20 18:45:57.878230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:36:39.362 [2024-11-20 18:45:57.878239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.362 [2024-11-20 18:45:57.878262] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:36:39.362 [2024-11-20 18:45:57.880529] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:36:39.362 [2024-11-20 18:45:57.880823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.362 [2024-11-20 18:45:57.880848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:39.362 [2024-11-20 18:45:57.880859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.565 ms 00:36:39.362 [2024-11-20 18:45:57.880869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.362 [2024-11-20 18:45:57.881407] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:36:39.362 [2024-11-20 18:45:57.881451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.362 [2024-11-20 18:45:57.881462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:36:39.362 [2024-11-20 18:45:57.881476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:36:39.362 [2024-11-20 18:45:57.881485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.362 [2024-11-20 18:45:57.881591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.362 [2024-11-20 18:45:57.881602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:36:39.362 [2024-11-20 18:45:57.881612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:36:39.362 [2024-11-20 18:45:57.881620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.362 [2024-11-20 18:45:57.881901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:36:39.362 [2024-11-20 18:45:57.881916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:39.362 [2024-11-20 18:45:57.881925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:36:39.362 [2024-11-20 18:45:57.881934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.362 [2024-11-20 18:45:57.882011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.362 [2024-11-20 18:45:57.882020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:39.362 [2024-11-20 18:45:57.882029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:36:39.362 [2024-11-20 18:45:57.882037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.362 [2024-11-20 18:45:57.882081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.362 [2024-11-20 18:45:57.882090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:36:39.362 [2024-11-20 18:45:57.882124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:36:39.362 [2024-11-20 18:45:57.882136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.362 [2024-11-20 18:45:57.882161] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:36:39.362 [2024-11-20 18:45:57.887067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.362 [2024-11-20 18:45:57.887141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:39.362 [2024-11-20 18:45:57.887162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.913 ms 00:36:39.362 [2024-11-20 18:45:57.887176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.362 [2024-11-20 18:45:57.887240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.362 [2024-11-20 18:45:57.887250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:36:39.362 [2024-11-20 18:45:57.887260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:36:39.362 [2024-11-20 18:45:57.887277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.362 [2024-11-20 18:45:57.887321] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:36:39.362 [2024-11-20 18:45:57.887356] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:36:39.362 [2024-11-20 18:45:57.887399] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:36:39.362 [2024-11-20 18:45:57.887418] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:36:39.362 [2024-11-20 18:45:57.887538] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:36:39.363 [2024-11-20 18:45:57.887552] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:36:39.363 [2024-11-20 18:45:57.887565] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:36:39.363 [2024-11-20 18:45:57.887577] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:36:39.363 [2024-11-20 18:45:57.887586] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:36:39.363 [2024-11-20 18:45:57.887595] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:36:39.363 [2024-11-20 18:45:57.887606] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:36:39.363 [2024-11-20 18:45:57.887614] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:36:39.363 [2024-11-20 18:45:57.887622] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:36:39.363 [2024-11-20 18:45:57.887631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.363 [2024-11-20 18:45:57.887639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:36:39.363 [2024-11-20 18:45:57.887648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:36:39.363 [2024-11-20 18:45:57.887656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.363 [2024-11-20 18:45:57.887744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.363 [2024-11-20 18:45:57.887756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:36:39.363 [2024-11-20 18:45:57.887765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:36:39.363 [2024-11-20 18:45:57.887776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.363 [2024-11-20 18:45:57.887881] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:36:39.363 [2024-11-20 18:45:57.887894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:36:39.363 [2024-11-20 18:45:57.887904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:36:39.363 [2024-11-20 18:45:57.887913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:39.363 [2024-11-20 18:45:57.887922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:36:39.363 [2024-11-20 18:45:57.887934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:36:39.363 [2024-11-20 18:45:57.887942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:36:39.363 [2024-11-20 18:45:57.887951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:36:39.363 [2024-11-20 18:45:57.887958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:36:39.363 [2024-11-20 18:45:57.887966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:36:39.363 [2024-11-20 18:45:57.887977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:36:39.363 [2024-11-20 18:45:57.887986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:36:39.363 [2024-11-20 18:45:57.887994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:36:39.363 [2024-11-20 18:45:57.888002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:36:39.363 [2024-11-20 18:45:57.888009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:36:39.363 [2024-11-20 18:45:57.888017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:39.363 [2024-11-20 18:45:57.888024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:36:39.363 [2024-11-20 18:45:57.888037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:36:39.363 [2024-11-20 18:45:57.888044] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:39.363 [2024-11-20 18:45:57.888050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:36:39.363 [2024-11-20 18:45:57.888059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:36:39.363 [2024-11-20 18:45:57.888065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:39.363 [2024-11-20 18:45:57.888072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:36:39.363 [2024-11-20 18:45:57.888078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:36:39.363 [2024-11-20 18:45:57.888084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:39.363 [2024-11-20 18:45:57.888107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:36:39.363 [2024-11-20 18:45:57.888116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:36:39.363 [2024-11-20 18:45:57.888122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:39.363 [2024-11-20 18:45:57.888129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:36:39.363 [2024-11-20 18:45:57.888136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:36:39.363 [2024-11-20 18:45:57.888143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:39.363 [2024-11-20 18:45:57.888150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:36:39.363 [2024-11-20 18:45:57.888159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:36:39.363 [2024-11-20 18:45:57.888166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:36:39.363 [2024-11-20 18:45:57.888174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:36:39.363 [2024-11-20 18:45:57.888180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:36:39.363 [2024-11-20 18:45:57.888186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:36:39.363 [2024-11-20 18:45:57.888193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:36:39.363 [2024-11-20 18:45:57.888201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:36:39.363 [2024-11-20 18:45:57.888208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:39.363 [2024-11-20 18:45:57.888214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:36:39.363 [2024-11-20 18:45:57.888221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:36:39.363 [2024-11-20 18:45:57.888230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:39.363 [2024-11-20 18:45:57.888238] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:36:39.363 [2024-11-20 18:45:57.888248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:36:39.363 [2024-11-20 18:45:57.888257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:36:39.363 [2024-11-20 18:45:57.888265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:39.363 [2024-11-20 18:45:57.888274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:36:39.363 [2024-11-20 18:45:57.888282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:36:39.363 [2024-11-20 18:45:57.888289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:36:39.363 
[2024-11-20 18:45:57.888297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:36:39.363 [2024-11-20 18:45:57.888304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:36:39.363 [2024-11-20 18:45:57.888310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:36:39.363 [2024-11-20 18:45:57.888322] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:36:39.363 [2024-11-20 18:45:57.888337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:39.363 [2024-11-20 18:45:57.888346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:36:39.363 [2024-11-20 18:45:57.888354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:36:39.363 [2024-11-20 18:45:57.888361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:36:39.363 [2024-11-20 18:45:57.888369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:36:39.363 [2024-11-20 18:45:57.888376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:36:39.363 [2024-11-20 18:45:57.888384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:36:39.363 [2024-11-20 18:45:57.888392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:36:39.363 [2024-11-20 18:45:57.888400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:36:39.363 [2024-11-20 18:45:57.888410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:36:39.363 [2024-11-20 18:45:57.888418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:36:39.363 [2024-11-20 18:45:57.888425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:36:39.363 [2024-11-20 18:45:57.888433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:36:39.363 [2024-11-20 18:45:57.888440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:36:39.363 [2024-11-20 18:45:57.888449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:36:39.363 [2024-11-20 18:45:57.888456] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:36:39.363 [2024-11-20 18:45:57.888465] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:39.363 [2024-11-20 18:45:57.888475] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:36:39.363 [2024-11-20 18:45:57.888483] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:36:39.363 [2024-11-20 18:45:57.888490] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:36:39.363 [2024-11-20 18:45:57.888502] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:36:39.363 [2024-11-20 18:45:57.888510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.363 [2024-11-20 18:45:57.888518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:36:39.363 [2024-11-20 18:45:57.888526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.699 ms 00:36:39.363 [2024-11-20 18:45:57.888533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.363 [2024-11-20 18:45:57.920490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.363 [2024-11-20 18:45:57.920533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:39.364 [2024-11-20 18:45:57.920546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.913 ms 00:36:39.364 [2024-11-20 18:45:57.920555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.364 [2024-11-20 18:45:57.920639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.364 [2024-11-20 18:45:57.920648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:36:39.364 [2024-11-20 18:45:57.920657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:36:39.364 [2024-11-20 18:45:57.920670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.364 [2024-11-20 18:45:57.970997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.364 [2024-11-20 18:45:57.971051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:39.364 [2024-11-20 18:45:57.971066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.271 ms 00:36:39.364 [2024-11-20 18:45:57.971076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.364 [2024-11-20 18:45:57.971146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.364 [2024-11-20 18:45:57.971159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:39.364 [2024-11-20 18:45:57.971170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:36:39.364 [2024-11-20 18:45:57.971179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.364 [2024-11-20 18:45:57.971314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.364 [2024-11-20 18:45:57.971328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:39.364 [2024-11-20 18:45:57.971339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:36:39.364 [2024-11-20 18:45:57.971348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.364 [2024-11-20 18:45:57.971491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.364 [2024-11-20 18:45:57.971507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:39.364 [2024-11-20 18:45:57.971516] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:36:39.364 [2024-11-20 18:45:57.971526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.626 [2024-11-20 18:45:57.989699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.626 [2024-11-20 18:45:57.989972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:39.626 [2024-11-20 18:45:57.989993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.151 ms 00:36:39.626 [2024-11-20 18:45:57.990003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.626 [2024-11-20 18:45:57.990185] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:36:39.626 [2024-11-20 18:45:57.990202] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:36:39.626 [2024-11-20 18:45:57.990214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.626 [2024-11-20 18:45:57.990228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:36:39.626 [2024-11-20 18:45:57.990238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:36:39.626 [2024-11-20 18:45:57.990246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.626 [2024-11-20 18:45:58.002544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.626 [2024-11-20 18:45:58.002586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:36:39.626 [2024-11-20 18:45:58.002598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.280 ms 00:36:39.626 [2024-11-20 18:45:58.002608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.626 [2024-11-20 18:45:58.002745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.626 [2024-11-20 18:45:58.002754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:36:39.626 [2024-11-20 18:45:58.002763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:36:39.626 [2024-11-20 18:45:58.002777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.626 [2024-11-20 18:45:58.002829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.626 [2024-11-20 18:45:58.002841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:36:39.626 [2024-11-20 18:45:58.002851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:36:39.626 [2024-11-20 18:45:58.002859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.626 [2024-11-20 18:45:58.003503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.626 [2024-11-20 18:45:58.003521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:36:39.626 [2024-11-20 18:45:58.003531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.603 ms 00:36:39.626 [2024-11-20 18:45:58.003539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.626 [2024-11-20 18:45:58.003557] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:36:39.626 [2024-11-20 18:45:58.003573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.626 [2024-11-20 18:45:58.003582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:36:39.626 [2024-11-20 18:45:58.003592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:36:39.626 [2024-11-20 18:45:58.003599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.626 [2024-11-20 18:45:58.017777] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:36:39.626 [2024-11-20 18:45:58.017947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.626 [2024-11-20 18:45:58.017960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:36:39.626 [2024-11-20 18:45:58.017971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.329 ms 00:36:39.626 [2024-11-20 18:45:58.017979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.626 [2024-11-20 18:45:58.020257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.626 [2024-11-20 18:45:58.020437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:36:39.626 [2024-11-20 18:45:58.020457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.254 ms 00:36:39.626 [2024-11-20 18:45:58.020465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.627 [2024-11-20 18:45:58.020552] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:36:39.627 [2024-11-20 18:45:58.021015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.627 [2024-11-20 18:45:58.021038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:36:39.627 [2024-11-20 18:45:58.021049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.487 ms 00:36:39.627 [2024-11-20 18:45:58.021058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.627 [2024-11-20 18:45:58.021088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.627 [2024-11-20 18:45:58.021125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:36:39.627 [2024-11-20 18:45:58.021134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:36:39.627 [2024-11-20 18:45:58.021144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.627 [2024-11-20 18:45:58.021212] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:36:39.627 [2024-11-20 18:45:58.021223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.627 [2024-11-20 18:45:58.021231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:36:39.627 [2024-11-20 18:45:58.021240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:36:39.627 [2024-11-20 18:45:58.021248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.627 [2024-11-20 18:45:58.049527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.627 [2024-11-20 18:45:58.049576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:36:39.627 [2024-11-20 18:45:58.049591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.256 ms 00:36:39.627 [2024-11-20 18:45:58.049599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.627 [2024-11-20 18:45:58.049695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:39.627 [2024-11-20 18:45:58.049705] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:36:39.627 [2024-11-20 18:45:58.049715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:36:39.627 [2024-11-20 18:45:58.049725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:39.627 [2024-11-20 18:45:58.051203] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 172.625 ms, result 0 00:36:41.015  [2024-11-20T18:46:00.590Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-20T18:46:01.538Z] Copying: 20/1024 [MB] (10 MBps) [2024-11-20T18:46:02.483Z] Copying: 43/1024 [MB] (22 MBps) [2024-11-20T18:46:03.428Z] Copying: 56/1024 [MB] (12 MBps) [2024-11-20T18:46:04.375Z] Copying: 68/1024 [MB] (12 MBps) [2024-11-20T18:46:05.321Z] Copying: 79/1024 [MB] (10 MBps) [2024-11-20T18:46:06.264Z] Copying: 95/1024 [MB] (16 MBps) [2024-11-20T18:46:07.650Z] Copying: 112/1024 [MB] (16 MBps) [2024-11-20T18:46:08.592Z] Copying: 130/1024 [MB] (17 MBps) [2024-11-20T18:46:09.534Z] Copying: 149/1024 [MB] (19 MBps) [2024-11-20T18:46:10.518Z] Copying: 168/1024 [MB] (18 MBps) [2024-11-20T18:46:11.492Z] Copying: 184/1024 [MB] (16 MBps) [2024-11-20T18:46:12.437Z] Copying: 203/1024 [MB] (18 MBps) [2024-11-20T18:46:13.383Z] Copying: 221/1024 [MB] (17 MBps) [2024-11-20T18:46:14.329Z] Copying: 243/1024 [MB] (22 MBps) [2024-11-20T18:46:15.275Z] Copying: 256/1024 [MB] (12 MBps) [2024-11-20T18:46:16.664Z] Copying: 267/1024 [MB] (10 MBps) [2024-11-20T18:46:17.609Z] Copying: 279/1024 [MB] (11 MBps) [2024-11-20T18:46:18.553Z] Copying: 296/1024 [MB] (17 MBps) [2024-11-20T18:46:19.496Z] Copying: 306/1024 [MB] (10 MBps) [2024-11-20T18:46:20.442Z] Copying: 318/1024 [MB] (11 MBps) [2024-11-20T18:46:21.386Z] Copying: 328/1024 [MB] (10 MBps) [2024-11-20T18:46:22.330Z] Copying: 340/1024 [MB] (11 MBps) [2024-11-20T18:46:23.275Z] Copying: 351/1024 [MB] (11 MBps) [2024-11-20T18:46:24.660Z] Copying: 362/1024 [MB] (11 MBps) [2024-11-20T18:46:25.603Z] Copying: 374/1024 [MB] (11 MBps) [2024-11-20T18:46:26.546Z] Copying: 389/1024 [MB] (15 MBps) [2024-11-20T18:46:27.490Z] Copying: 401/1024 [MB] (11 MBps) [2024-11-20T18:46:28.434Z] Copying: 412/1024 [MB] (11 MBps) [2024-11-20T18:46:29.379Z] Copying: 424/1024 [MB] (11 MBps) [2024-11-20T18:46:30.323Z] Copying: 434/1024 [MB] (10 MBps) [2024-11-20T18:46:31.267Z] Copying: 445/1024 [MB] (11 MBps) [2024-11-20T18:46:32.656Z] Copying: 456/1024 [MB] (10 MBps) [2024-11-20T18:46:33.601Z] Copying: 468/1024 [MB] (11 MBps) [2024-11-20T18:46:34.548Z] Copying: 478/1024 [MB] (10 MBps) [2024-11-20T18:46:35.496Z] Copying: 489/1024 [MB] (10 MBps) [2024-11-20T18:46:36.442Z] Copying: 500/1024 [MB] (11 MBps) [2024-11-20T18:46:37.387Z] Copying: 511/1024 [MB] (11 MBps) [2024-11-20T18:46:38.332Z] Copying: 523/1024 [MB] (11 MBps) [2024-11-20T18:46:39.278Z] Copying: 533/1024 [MB] (10 MBps) [2024-11-20T18:46:40.667Z] Copying: 544/1024 [MB] (10 MBps) [2024-11-20T18:46:41.665Z] Copying: 555/1024 [MB] (10 MBps) [2024-11-20T18:46:42.609Z] Copying: 566/1024 [MB] (10 MBps) [2024-11-20T18:46:43.549Z] Copying: 577/1024 [MB] (10 MBps) [2024-11-20T18:46:44.491Z] Copying: 593/1024 [MB] (16 MBps) [2024-11-20T18:46:45.431Z] Copying: 606/1024 [MB] (12 MBps) [2024-11-20T18:46:46.375Z] Copying: 629/1024 [MB] (22 MBps) [2024-11-20T18:46:47.319Z] Copying: 643/1024 [MB] (14 MBps) [2024-11-20T18:46:48.702Z] Copying: 659/1024 [MB] (16 MBps) [2024-11-20T18:46:49.276Z] Copying: 676/1024 [MB] (16 MBps) [2024-11-20T18:46:50.663Z] Copying: 699/1024 [MB] (23 MBps) 
[2024-11-20T18:46:51.606Z] Copying: 718/1024 [MB] (18 MBps) [2024-11-20T18:46:52.548Z] Copying: 739/1024 [MB] (21 MBps) [2024-11-20T18:46:53.487Z] Copying: 763/1024 [MB] (24 MBps) [2024-11-20T18:46:54.431Z] Copying: 787/1024 [MB] (23 MBps) [2024-11-20T18:46:55.376Z] Copying: 809/1024 [MB] (22 MBps) [2024-11-20T18:46:56.321Z] Copying: 828/1024 [MB] (19 MBps) [2024-11-20T18:46:57.267Z] Copying: 844/1024 [MB] (16 MBps) [2024-11-20T18:46:58.657Z] Copying: 856/1024 [MB] (11 MBps) [2024-11-20T18:46:59.602Z] Copying: 872/1024 [MB] (16 MBps) [2024-11-20T18:47:00.545Z] Copying: 883/1024 [MB] (11 MBps) [2024-11-20T18:47:01.488Z] Copying: 894/1024 [MB] (10 MBps) [2024-11-20T18:47:02.429Z] Copying: 906/1024 [MB] (11 MBps) [2024-11-20T18:47:03.373Z] Copying: 919/1024 [MB] (13 MBps) [2024-11-20T18:47:04.317Z] Copying: 931/1024 [MB] (11 MBps) [2024-11-20T18:47:05.703Z] Copying: 943/1024 [MB] (12 MBps) [2024-11-20T18:47:06.275Z] Copying: 966/1024 [MB] (22 MBps) [2024-11-20T18:47:07.660Z] Copying: 980/1024 [MB] (14 MBps) [2024-11-20T18:47:08.605Z] Copying: 992/1024 [MB] (11 MBps) [2024-11-20T18:47:09.549Z] Copying: 1003/1024 [MB] (11 MBps) [2024-11-20T18:47:10.122Z] Copying: 1014/1024 [MB] (11 MBps) [2024-11-20T18:47:10.384Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-20 18:47:10.158932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:51.755 [2024-11-20 18:47:10.159251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:37:51.755 [2024-11-20 18:47:10.159371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:37:51.755 [2024-11-20 18:47:10.159408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:51.755 [2024-11-20 18:47:10.159466] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:37:51.755 [2024-11-20 18:47:10.163873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:51.755 [2024-11-20 18:47:10.164056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:37:51.755 [2024-11-20 18:47:10.164153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.294 ms 00:37:51.755 [2024-11-20 18:47:10.164185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:51.755 [2024-11-20 18:47:10.164510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:51.755 [2024-11-20 18:47:10.164545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:37:51.755 [2024-11-20 18:47:10.164573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:37:51.755 [2024-11-20 18:47:10.164655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:51.755 [2024-11-20 18:47:10.164712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:51.755 [2024-11-20 18:47:10.164741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:37:51.755 [2024-11-20 18:47:10.164769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:37:51.755 [2024-11-20 18:47:10.164794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:51.755 [2024-11-20 18:47:10.164882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:51.755 [2024-11-20 18:47:10.165193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:37:51.755 [2024-11-20 18:47:10.165231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.029 ms 00:37:51.755 [2024-11-20 18:47:10.165242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:51.755 [2024-11-20 18:47:10.165272] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:37:51.755 [2024-11-20 18:47:10.165289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:37:51.755 [2024-11-20 18:47:10.165303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:37:51.755 [2024-11-20 18:47:10.165314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:37:51.755 [2024-11-20 18:47:10.165325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 
18:47:10.165523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 
00:37:51.756 [2024-11-20 18:47:10.165781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.165992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.166001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.166011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.166020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 
wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.166030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.166039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.166049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.166059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.166069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.166079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.166089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.166110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.166120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.166130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.166140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.166151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.166161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.166171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.166180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.166190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.166199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.166210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.166219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:37:51.756 [2024-11-20 18:47:10.166228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:37:51.757 [2024-11-20 18:47:10.166238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:37:51.757 [2024-11-20 18:47:10.166248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:37:51.757 [2024-11-20 18:47:10.166259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:37:51.757 [2024-11-20 18:47:10.166268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:37:51.757 [2024-11-20 18:47:10.166279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 98: 0 / 261120 wr_cnt: 0 state: free 00:37:51.757 [2024-11-20 18:47:10.166289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:37:51.757 [2024-11-20 18:47:10.166299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:37:51.757 [2024-11-20 18:47:10.166319] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:37:51.757 [2024-11-20 18:47:10.166330] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ceec07e5-4dff-4c79-a04f-9021d06f00b5 00:37:51.757 [2024-11-20 18:47:10.166341] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:37:51.757 [2024-11-20 18:47:10.166350] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 4128 00:37:51.757 [2024-11-20 18:47:10.166359] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 4096 00:37:51.757 [2024-11-20 18:47:10.166369] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0078 00:37:51.757 [2024-11-20 18:47:10.166378] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:37:51.757 [2024-11-20 18:47:10.166391] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:37:51.757 [2024-11-20 18:47:10.166401] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:37:51.757 [2024-11-20 18:47:10.166409] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:37:51.757 [2024-11-20 18:47:10.166417] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:37:51.757 [2024-11-20 18:47:10.166427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:51.757 [2024-11-20 18:47:10.166436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:37:51.757 [2024-11-20 18:47:10.166447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.156 ms 00:37:51.757 [2024-11-20 18:47:10.166455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:51.757 [2024-11-20 18:47:10.180960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:51.757 [2024-11-20 18:47:10.181193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:37:51.757 [2024-11-20 18:47:10.181294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.484 ms 00:37:51.757 [2024-11-20 18:47:10.181326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:51.757 [2024-11-20 18:47:10.181768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:51.757 [2024-11-20 18:47:10.181934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:37:51.757 [2024-11-20 18:47:10.181963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.370 ms 00:37:51.757 [2024-11-20 18:47:10.181983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:51.757 [2024-11-20 18:47:10.218972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:51.757 [2024-11-20 18:47:10.219181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:37:51.757 [2024-11-20 18:47:10.219251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:51.757 [2024-11-20 18:47:10.219278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:51.757 [2024-11-20 18:47:10.219367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:51.757 [2024-11-20 
18:47:10.219390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:37:51.757 [2024-11-20 18:47:10.219410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:37:51.757 [2024-11-20 18:47:10.219429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:51.757 [2024-11-20 18:47:10.219506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:37:51.757 [2024-11-20 18:47:10.219583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:37:51.757 [2024-11-20 18:47:10.219616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:37:51.757 [2024-11-20 18:47:10.219635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:51.757 [2024-11-20 18:47:10.219664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:37:51.757 [2024-11-20 18:47:10.219685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:37:51.757 [2024-11-20 18:47:10.219706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:37:51.757 [2024-11-20 18:47:10.219725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:51.757 [2024-11-20 18:47:10.305312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:37:51.757 [2024-11-20 18:47:10.305545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:37:51.757 [2024-11-20 18:47:10.305616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:37:51.757 [2024-11-20 18:47:10.305641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:51.757 [2024-11-20 18:47:10.374926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:37:51.757 [2024-11-20 18:47:10.375158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:37:51.757 [2024-11-20 18:47:10.375182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:37:51.757 [2024-11-20 18:47:10.375192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:51.757 [2024-11-20 18:47:10.375298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:37:51.757 [2024-11-20 18:47:10.375310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:37:51.757 [2024-11-20 18:47:10.375320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:37:51.757 [2024-11-20 18:47:10.375333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:51.757 [2024-11-20 18:47:10.375382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:37:51.757 [2024-11-20 18:47:10.375392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:37:51.757 [2024-11-20 18:47:10.375402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:37:51.757 [2024-11-20 18:47:10.375410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:51.757 [2024-11-20 18:47:10.375498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:37:51.757 [2024-11-20 18:47:10.375509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:37:51.757 [2024-11-20 18:47:10.375517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:37:51.757 [2024-11-20 18:47:10.375526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:51.757 [2024-11-20 18:47:10.375557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:37:51.757 [2024-11-20 18:47:10.375568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:37:51.757 [2024-11-20 18:47:10.375576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:37:51.757 [2024-11-20 18:47:10.375584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:51.757 [2024-11-20 18:47:10.375628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:37:51.757 [2024-11-20 18:47:10.375638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:37:51.757 [2024-11-20 18:47:10.375647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:37:51.757 [2024-11-20 18:47:10.375656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:51.757 [2024-11-20 18:47:10.375712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:37:51.757 [2024-11-20 18:47:10.375723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:37:51.757 [2024-11-20 18:47:10.375732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:37:51.757 [2024-11-20 18:47:10.375740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:51.757 [2024-11-20 18:47:10.375883] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 216.917 ms, result 0
00:37:52.699
00:37:52.699
00:37:52.699 18:47:11 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:37:55.329 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:37:55.329 18:47:13 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:37:55.329 18:47:13 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill
00:37:55.329 18:47:13 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:37:55.329 18:47:13 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:37:55.329 18:47:13 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:37:55.329 Process with pid 84107 is not found
00:37:55.329 18:47:13 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 84107
00:37:55.329 18:47:13 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 84107 ']'
00:37:55.329 18:47:13 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 84107
00:37:55.329 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (84107) - No such process
00:37:55.329 18:47:13 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 84107 is not found'
00:37:55.329 18:47:13 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm
00:37:55.329 Remove shared memory files
00:37:55.329 18:47:13 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files
00:37:55.329 18:47:13 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f
00:37:55.329 18:47:13 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_ceec07e5-4dff-4c79-a04f-9021d06f00b5_band_md /dev/hugepages/ftl_ceec07e5-4dff-4c79-a04f-9021d06f00b5_l2p_l1 /dev/hugepages/ftl_ceec07e5-4dff-4c79-a04f-9021d06f00b5_l2p_l2 /dev/hugepages/ftl_ceec07e5-4dff-4c79-a04f-9021d06f00b5_l2p_l2_ctx /dev/hugepages/ftl_ceec07e5-4dff-4c79-a04f-9021d06f00b5_nvc_md /dev/hugepages/ftl_ceec07e5-4dff-4c79-a04f-9021d06f00b5_p2l_pool /dev/hugepages/ftl_ceec07e5-4dff-4c79-a04f-9021d06f00b5_sb /dev/hugepages/ftl_ceec07e5-4dff-4c79-a04f-9021d06f00b5_sb_shm /dev/hugepages/ftl_ceec07e5-4dff-4c79-a04f-9021d06f00b5_trim_bitmap /dev/hugepages/ftl_ceec07e5-4dff-4c79-a04f-9021d06f00b5_trim_log /dev/hugepages/ftl_ceec07e5-4dff-4c79-a04f-9021d06f00b5_trim_md /dev/hugepages/ftl_ceec07e5-4dff-4c79-a04f-9021d06f00b5_vmap
00:37:55.329 18:47:13 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f
00:37:55.329 18:47:13 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:37:55.329 18:47:13 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f
00:37:55.329 ************************************
00:37:55.329 END TEST ftl_restore_fast
00:37:55.329 ************************************
00:37:55.329
00:37:55.329 real 6m5.476s
00:37:55.329 user 5m54.015s
00:37:55.329 sys 0m11.146s
00:37:55.329 18:47:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable
00:37:55.329 18:47:13 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x
00:37:55.329 18:47:13 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:37:55.329 18:47:13 ftl -- ftl/ftl.sh@14 -- # killprocess 74919
00:37:55.329 18:47:13 ftl -- common/autotest_common.sh@954 -- # '[' -z 74919 ']'
00:37:55.329 Process with pid 74919 is not found
00:37:55.329 18:47:13 ftl -- common/autotest_common.sh@958 -- # kill -0 74919
00:37:55.329 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (74919) - No such process
00:37:55.329 18:47:13 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 74919 is not found'
00:37:55.329 18:47:13 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:37:55.329 18:47:13 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=87814
00:37:55.329 18:47:13 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:37:55.329 18:47:13 ftl -- ftl/ftl.sh@20 -- # waitforlisten 87814
00:37:55.329 18:47:13 ftl -- common/autotest_common.sh@835 -- # '[' -z 87814 ']'
00:37:55.329 18:47:13 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:37:55.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:37:55.329 18:47:13 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
00:37:55.329 18:47:13 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:37:55.329 18:47:13 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
00:37:55.329 18:47:13 ftl -- common/autotest_common.sh@10 -- # set +x
00:37:55.329 [2024-11-20 18:47:13.655903] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization...
00:37:55.329 [2024-11-20 18:47:13.656742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87814 ]
00:37:55.329 [2024-11-20 18:47:13.822870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:55.329 [2024-11-20 18:47:13.943508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:37:56.272 18:47:14 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:37:56.272 18:47:14 ftl -- common/autotest_common.sh@868 -- # return 0
00:37:56.272 18:47:14 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:37:56.532 nvme0n1
00:37:56.532 18:47:14 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:37:56.532 18:47:14 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:37:56.532 18:47:14 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:37:56.533 18:47:15 ftl -- ftl/common.sh@28 -- # stores=f60eab26-a097-4227-8a6c-629b7b0aedf7
00:37:56.533 18:47:15 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:37:56.533 18:47:15 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f60eab26-a097-4227-8a6c-629b7b0aedf7
00:37:56.795 18:47:15 ftl -- ftl/ftl.sh@23 -- # killprocess 87814
00:37:56.795 18:47:15 ftl -- common/autotest_common.sh@954 -- # '[' -z 87814 ']'
00:37:56.795 18:47:15 ftl -- common/autotest_common.sh@958 -- # kill -0 87814
00:37:56.795 18:47:15 ftl -- common/autotest_common.sh@959 -- # uname
00:37:56.795 18:47:15 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:37:56.795 18:47:15 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87814
00:37:56.795 killing process with pid 87814
00:37:56.795 18:47:15 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:37:56.795 18:47:15 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:37:56.795 18:47:15 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87814'
00:37:56.795 18:47:15 ftl -- common/autotest_common.sh@973 -- # kill 87814
00:37:56.795 18:47:15 ftl -- common/autotest_common.sh@978 -- # wait 87814
00:37:58.178 18:47:16 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:37:58.439 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:37:58.439 Waiting for block devices as requested
00:37:58.439 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:37:58.701 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:37:58.701 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:37:58.701 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:38:03.990 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:38:03.990 18:47:22 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:38:03.990 Remove shared memory files
00:38:03.990 18:47:22 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:38:03.990 18:47:22 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:38:03.990 18:47:22 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:38:03.990 18:47:22 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:38:03.990 18:47:22 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:38:03.990 18:47:22 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:38:03.990 ************************************
00:38:03.990 END TEST ftl
00:38:03.990 ************************************
00:38:03.990
00:38:03.990 real 19m49.305s
00:38:03.990 user 21m33.528s
00:38:03.990 sys 1m37.357s
00:38:03.990 18:47:22 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:38:03.990 18:47:22 ftl -- common/autotest_common.sh@10 -- # set +x
00:38:03.990 18:47:22 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:38:03.990 18:47:22 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:38:03.990 18:47:22 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:38:03.990 18:47:22 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:38:03.990 18:47:22 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:38:03.990 18:47:22 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:38:03.990 18:47:22 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:38:03.990 18:47:22 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:38:03.990 18:47:22 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:38:03.990 18:47:22 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:38:03.990 18:47:22 -- common/autotest_common.sh@726 -- # xtrace_disable
00:38:03.990 18:47:22 -- common/autotest_common.sh@10 -- # set +x
00:38:03.990 18:47:22 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:38:03.990 18:47:22 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:38:03.990 18:47:22 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:38:03.990 18:47:22 -- common/autotest_common.sh@10 -- # set +x
00:38:05.377 INFO: APP EXITING
00:38:05.377 INFO: killing all VMs
00:38:05.377 INFO: killing vhost app
00:38:05.377 INFO: EXIT DONE
00:38:05.638 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:38:05.899 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:38:06.161 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:38:06.161 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:38:06.161 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:38:06.423 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:38:06.684 Cleaning
00:38:06.684 Removing: /var/run/dpdk/spdk0/config
00:38:06.684 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:38:06.684 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:38:06.684 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:38:06.684 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:38:06.684 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:38:06.684 Removing: /var/run/dpdk/spdk0/hugepage_info
00:38:06.684 Removing: /var/run/dpdk/spdk0
00:38:06.684 Removing: /var/run/dpdk/spdk_pid56898
00:38:06.684 Removing: /var/run/dpdk/spdk_pid57100
00:38:06.684 Removing: /var/run/dpdk/spdk_pid57307
00:38:06.945 Removing: /var/run/dpdk/spdk_pid57400
00:38:06.945 Removing: /var/run/dpdk/spdk_pid57440
00:38:06.945 Removing: /var/run/dpdk/spdk_pid57557
00:38:06.945 Removing: /var/run/dpdk/spdk_pid57575
00:38:06.945 Removing: /var/run/dpdk/spdk_pid57763
00:38:06.945 Removing: /var/run/dpdk/spdk_pid57856
00:38:06.945 Removing: /var/run/dpdk/spdk_pid57953
00:38:06.945 Removing: /var/run/dpdk/spdk_pid58058
00:38:06.945 Removing: /var/run/dpdk/spdk_pid58150
00:38:06.945 Removing: /var/run/dpdk/spdk_pid58189
00:38:06.945 Removing: /var/run/dpdk/spdk_pid58226
00:38:06.945 Removing: /var/run/dpdk/spdk_pid58296
00:38:06.945 Removing: /var/run/dpdk/spdk_pid58375
00:38:06.945 Removing: /var/run/dpdk/spdk_pid58800
00:38:06.945 Removing: /var/run/dpdk/spdk_pid58864
00:38:06.945 Removing: /var/run/dpdk/spdk_pid58916
00:38:06.946 Removing: /var/run/dpdk/spdk_pid58932
00:38:06.946 Removing: /var/run/dpdk/spdk_pid59023
00:38:06.946 Removing: /var/run/dpdk/spdk_pid59039
00:38:06.946 Removing: /var/run/dpdk/spdk_pid59130
00:38:06.946 Removing: /var/run/dpdk/spdk_pid59146
00:38:06.946 Removing: /var/run/dpdk/spdk_pid59199
00:38:06.946 Removing: /var/run/dpdk/spdk_pid59217
00:38:06.946 Removing: /var/run/dpdk/spdk_pid59270
00:38:06.946 Removing: /var/run/dpdk/spdk_pid59288
00:38:06.946 Removing: /var/run/dpdk/spdk_pid59437
00:38:06.946 Removing: /var/run/dpdk/spdk_pid59474
00:38:06.946 Removing: /var/run/dpdk/spdk_pid59557
00:38:06.946 Removing: /var/run/dpdk/spdk_pid59729
00:38:06.946 Removing: /var/run/dpdk/spdk_pid59808
00:38:06.946 Removing: /var/run/dpdk/spdk_pid59850
00:38:06.946 Removing: /var/run/dpdk/spdk_pid60274
00:38:06.946 Removing: /var/run/dpdk/spdk_pid60372
00:38:06.946 Removing: /var/run/dpdk/spdk_pid60481
00:38:06.946 Removing: /var/run/dpdk/spdk_pid60534
00:38:06.946 Removing: /var/run/dpdk/spdk_pid60554
00:38:06.946 Removing: /var/run/dpdk/spdk_pid60638
00:38:06.946 Removing: /var/run/dpdk/spdk_pid61258
00:38:06.946 Removing: /var/run/dpdk/spdk_pid61295
00:38:06.946 Removing: /var/run/dpdk/spdk_pid61757
00:38:06.946 Removing: /var/run/dpdk/spdk_pid61855
00:38:06.946 Removing: /var/run/dpdk/spdk_pid61964
00:38:06.946 Removing: /var/run/dpdk/spdk_pid62017
00:38:06.946 Removing: /var/run/dpdk/spdk_pid62037
00:38:06.946 Removing: /var/run/dpdk/spdk_pid62068
00:38:06.946 Removing: /var/run/dpdk/spdk_pid63912
00:38:06.946 Removing: /var/run/dpdk/spdk_pid64038
00:38:06.946 Removing: /var/run/dpdk/spdk_pid64048
00:38:06.946 Removing: /var/run/dpdk/spdk_pid64065
00:38:06.946 Removing: /var/run/dpdk/spdk_pid64107
00:38:06.946 Removing: /var/run/dpdk/spdk_pid64111
00:38:06.946 Removing: /var/run/dpdk/spdk_pid64123
00:38:06.946 Removing: /var/run/dpdk/spdk_pid64169
00:38:06.946 Removing: /var/run/dpdk/spdk_pid64173
00:38:06.946 Removing: /var/run/dpdk/spdk_pid64185
00:38:06.946 Removing: /var/run/dpdk/spdk_pid64230
00:38:06.946 Removing: /var/run/dpdk/spdk_pid64234
00:38:06.946 Removing: /var/run/dpdk/spdk_pid64246
00:38:06.946 Removing: /var/run/dpdk/spdk_pid65643
00:38:06.946 Removing: /var/run/dpdk/spdk_pid65740
00:38:06.946 Removing: /var/run/dpdk/spdk_pid67153
00:38:06.946 Removing: /var/run/dpdk/spdk_pid68923
00:38:06.946 Removing: /var/run/dpdk/spdk_pid68996
00:38:06.946 Removing: /var/run/dpdk/spdk_pid69074
00:38:06.946 Removing: /var/run/dpdk/spdk_pid69178
00:38:06.946 Removing: /var/run/dpdk/spdk_pid69275
00:38:06.946 Removing: /var/run/dpdk/spdk_pid69371
00:38:06.946 Removing: /var/run/dpdk/spdk_pid69445
00:38:06.946 Removing: /var/run/dpdk/spdk_pid69520
00:38:06.946 Removing: /var/run/dpdk/spdk_pid69624
00:38:06.946 Removing: /var/run/dpdk/spdk_pid69716
00:38:06.946 Removing: /var/run/dpdk/spdk_pid69812
00:38:06.946 Removing: /var/run/dpdk/spdk_pid69886
00:38:06.946 Removing: /var/run/dpdk/spdk_pid69962
00:38:06.946 Removing: /var/run/dpdk/spdk_pid70065
00:38:06.946 Removing: /var/run/dpdk/spdk_pid70158
00:38:06.946 Removing: /var/run/dpdk/spdk_pid70255
00:38:06.946 Removing: /var/run/dpdk/spdk_pid70330
00:38:06.946 Removing: /var/run/dpdk/spdk_pid70405
00:38:06.946 Removing: /var/run/dpdk/spdk_pid70508
00:38:06.946 Removing: /var/run/dpdk/spdk_pid70601
00:38:06.946 Removing: /var/run/dpdk/spdk_pid70697
00:38:06.946 Removing: /var/run/dpdk/spdk_pid70765
00:38:06.946 Removing: /var/run/dpdk/spdk_pid70840
00:38:06.946 Removing: /var/run/dpdk/spdk_pid70914
00:38:06.946 Removing: /var/run/dpdk/spdk_pid70988
00:38:06.946 Removing: /var/run/dpdk/spdk_pid71098
00:38:06.946 Removing: /var/run/dpdk/spdk_pid71183
00:38:06.946 Removing: /var/run/dpdk/spdk_pid71278
00:38:06.946 Removing: /var/run/dpdk/spdk_pid71352
00:38:06.946 Removing: /var/run/dpdk/spdk_pid71426
00:38:06.946 Removing: /var/run/dpdk/spdk_pid71507
00:38:06.946 Removing: /var/run/dpdk/spdk_pid71577
00:38:06.946 Removing: /var/run/dpdk/spdk_pid71679
00:38:06.946 Removing: /var/run/dpdk/spdk_pid71774
00:38:06.946 Removing: /var/run/dpdk/spdk_pid71919
00:38:06.946 Removing: /var/run/dpdk/spdk_pid72202
00:38:06.946 Removing: /var/run/dpdk/spdk_pid72234
00:38:06.946 Removing: /var/run/dpdk/spdk_pid72686
00:38:06.946 Removing: /var/run/dpdk/spdk_pid72873
00:38:06.946 Removing: /var/run/dpdk/spdk_pid72975
00:38:06.946 Removing: /var/run/dpdk/spdk_pid73085
00:38:06.946 Removing: /var/run/dpdk/spdk_pid73130
00:38:06.946 Removing: /var/run/dpdk/spdk_pid73161
00:38:06.946 Removing: /var/run/dpdk/spdk_pid73456
00:38:06.946 Removing: /var/run/dpdk/spdk_pid73511
00:38:07.208 Removing: /var/run/dpdk/spdk_pid73582
00:38:07.208 Removing: /var/run/dpdk/spdk_pid73977
00:38:07.208 Removing: /var/run/dpdk/spdk_pid74119
00:38:07.208 Removing: /var/run/dpdk/spdk_pid74919
00:38:07.208 Removing: /var/run/dpdk/spdk_pid75057
00:38:07.208 Removing: /var/run/dpdk/spdk_pid75215
00:38:07.208 Removing: /var/run/dpdk/spdk_pid75303
00:38:07.208 Removing: /var/run/dpdk/spdk_pid75628
00:38:07.208 Removing: /var/run/dpdk/spdk_pid75891
00:38:07.208 Removing: /var/run/dpdk/spdk_pid76239
00:38:07.208 Removing: /var/run/dpdk/spdk_pid76421
00:38:07.208 Removing: /var/run/dpdk/spdk_pid76540
00:38:07.208 Removing: /var/run/dpdk/spdk_pid76593
00:38:07.208 Removing: /var/run/dpdk/spdk_pid76747
00:38:07.208 Removing: /var/run/dpdk/spdk_pid76783
00:38:07.208 Removing: /var/run/dpdk/spdk_pid76830
00:38:07.208 Removing: /var/run/dpdk/spdk_pid77063
00:38:07.208 Removing: /var/run/dpdk/spdk_pid77290
00:38:07.208 Removing: /var/run/dpdk/spdk_pid78103
00:38:07.208 Removing: /var/run/dpdk/spdk_pid78847
00:38:07.208 Removing: /var/run/dpdk/spdk_pid79520
00:38:07.208 Removing: /var/run/dpdk/spdk_pid80441
00:38:07.208 Removing: /var/run/dpdk/spdk_pid80580
00:38:07.208 Removing: /var/run/dpdk/spdk_pid80667
00:38:07.208 Removing: /var/run/dpdk/spdk_pid81044
00:38:07.208 Removing: /var/run/dpdk/spdk_pid81101
00:38:07.208 Removing: /var/run/dpdk/spdk_pid81826
00:38:07.208 Removing: /var/run/dpdk/spdk_pid82315
00:38:07.208 Removing: /var/run/dpdk/spdk_pid83091
00:38:07.208 Removing: /var/run/dpdk/spdk_pid83213
00:38:07.208 Removing: /var/run/dpdk/spdk_pid83255
00:38:07.208 Removing: /var/run/dpdk/spdk_pid83319
00:38:07.208 Removing: /var/run/dpdk/spdk_pid83374
00:38:07.208 Removing: /var/run/dpdk/spdk_pid83439
00:38:07.208 Removing: /var/run/dpdk/spdk_pid83623
00:38:07.208 Removing: /var/run/dpdk/spdk_pid83704
00:38:07.208 Removing: /var/run/dpdk/spdk_pid83771
00:38:07.208 Removing: /var/run/dpdk/spdk_pid83828
00:38:07.208 Removing: /var/run/dpdk/spdk_pid83857
00:38:07.208 Removing: /var/run/dpdk/spdk_pid83942
00:38:07.208 Removing: /var/run/dpdk/spdk_pid84107
00:38:07.208 Removing: /var/run/dpdk/spdk_pid84336
00:38:07.208 Removing: /var/run/dpdk/spdk_pid85172
00:38:07.208 Removing: /var/run/dpdk/spdk_pid86102
00:38:07.208 Removing: /var/run/dpdk/spdk_pid87033
00:38:07.208 Removing: /var/run/dpdk/spdk_pid87814
00:38:07.208 Clean
00:38:07.208 18:47:25 -- common/autotest_common.sh@1453 -- # return 0
00:38:07.208 18:47:25 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:38:07.208 18:47:25 -- common/autotest_common.sh@732 -- # xtrace_disable
00:38:07.208 18:47:25 -- common/autotest_common.sh@10 -- # set +x
00:38:07.208 18:47:25 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:38:07.208 18:47:25 -- common/autotest_common.sh@732 -- # xtrace_disable
00:38:07.208 18:47:25 -- common/autotest_common.sh@10 -- # set +x
00:38:07.208 18:47:25 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:38:07.208 18:47:25 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:38:07.208 18:47:25 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:38:07.208 18:47:25 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:38:07.470 18:47:25 -- spdk/autotest.sh@398 -- # hostname
00:38:07.470 18:47:25 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:38:07.470 geninfo: WARNING: invalid characters removed from testname!
00:38:34.053 18:47:51 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:38:35.966 18:47:54 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:38:39.272 18:47:57 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:38:41.819 18:48:00 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:38:44.365 18:48:02 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:38:46.907 18:48:05 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:38:49.456 18:48:07 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:38:49.456 18:48:07 -- spdk/autorun.sh@1 -- $ timing_finish
00:38:49.456 18:48:07 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:38:49.456 18:48:07 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:38:49.456 18:48:07 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:38:49.456 18:48:07 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:38:49.456 + [[ -n 5027 ]]
00:38:49.456 + sudo kill 5027
00:38:49.466 [Pipeline] }
00:38:49.482 [Pipeline] // timeout
00:38:49.488 [Pipeline] }
00:38:49.504 [Pipeline] // stage
00:38:49.512 [Pipeline] }
00:38:49.528 [Pipeline] // catchError
00:38:49.540 [Pipeline] stage
00:38:49.542 [Pipeline] { (Stop VM)
00:38:49.557 [Pipeline] sh
00:38:49.845 + vagrant halt
00:38:52.420 ==> default: Halting domain...
00:38:55.805 [Pipeline] sh
00:38:56.087 + vagrant destroy -f
00:38:58.627 ==> default: Removing domain...
00:38:59.212 [Pipeline] sh
00:38:59.500 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:38:59.511 [Pipeline] }
00:38:59.529 [Pipeline] // stage
00:38:59.536 [Pipeline] }
00:38:59.552 [Pipeline] // dir
00:38:59.558 [Pipeline] }
00:38:59.573 [Pipeline] // wrap
00:38:59.580 [Pipeline] }
00:38:59.592 [Pipeline] // catchError
00:38:59.602 [Pipeline] stage
00:38:59.604 [Pipeline] { (Epilogue)
00:38:59.619 [Pipeline] sh
00:38:59.904 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:39:05.193 [Pipeline] catchError
00:39:05.196 [Pipeline] {
00:39:05.210 [Pipeline] sh
00:39:05.497 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:39:05.497 Artifacts sizes are good
00:39:05.508 [Pipeline] }
00:39:05.523 [Pipeline] // catchError
00:39:05.534 [Pipeline] archiveArtifacts
00:39:05.542 Archiving artifacts
00:39:05.650 [Pipeline] cleanWs
00:39:05.664 [WS-CLEANUP] Deleting project workspace...
00:39:05.664 [WS-CLEANUP] Deferred wipeout is used...
00:39:05.672 [WS-CLEANUP] done
00:39:05.673 [Pipeline] }
00:39:05.688 [Pipeline] // stage
00:39:05.693 [Pipeline] }
00:39:05.706 [Pipeline] // node
00:39:05.712 [Pipeline] End of Pipeline
00:39:05.752 Finished: SUCCESS